Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 650–658, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Comparable Entity Mining from Comparative Questions Shasha Li1,Chin-Yew Lin2,Young-In Song2,Zhoujun Li3 1National University of Defense Technology, Changsha, China 2Microsoft Research Asia, Beijing, China 3Beihang University, Beijing, China [email protected], {cyl,yosong}@microsoft.com2, [email protected] Abstract Comparing one thing with another is a typical part of human decision making process. However, it is not always easy to know what to compare and what are the alternatives. To address this difficulty, we present a novel way to automatically mine comparable entities from comparative questions that users posted online. To ensure high precision and high recall, we develop a weakly-supervised bootstrapping method for comparative question identification and comparable entity extraction by leveraging a large online question archive. The experimental results show our method achieves F1measure of 82.5% in comparative question identification and 83.3% in comparable entity extraction. Both significantly outperform an existing state-of-the-art method. 1 Introduction Comparing alternative options is one essential step in decision-making that we carry out every day. For example, if someone is interested in certain products such as digital cameras, he or she would want to know what the alternatives are and compare different cameras before making a purchase. This type of comparison activity is very common in our daily life but requires high knowledge skill. Magazines such as Consumer Reports and PC Magazine and online media such as CNet.com strive in providing editorial comparison content and surveys to satisfy this need. In the World Wide Web era, a comparison activity typically involves: search for relevant web pages containing information about the targeted products, find competing products, read reviews, and identify pros and cons. In this paper, we focus on finding a set of comparable entities given a user‟s input entity. For example, given an entity, Nokia N95 (a cellphone), we want to find comparable entities such as Nokia N82, iPhone and so on. In general, it is difficult to decide if two entities are comparable or not since people do compare apples and oranges for various reasons. For example, “Ford” and “BMW” might be comparable as “car manufacturers” or as “market segments that their products are targeting”, but we rarely see people comparing “Ford Focus” (car model) and “BMW 328i”. Things also get more complicated when an entity has several functionalities. For example, one might compare “iPhone” and “PSP” as “portable game player” while compare “iPhone” and “Nokia N95” as “mobile phone”. Fortunately, plenty of comparative questions are posted online, which provide evidences for what people want to compare, e.g. “Which to buy, iPod or iPhone?”. We call “iPod” and “iPhone” in this example as comparators. In this paper, we define comparative questions and comparators as:  Comparative question: A question that intends to compare two or more entities and it has to mention these entities explicitly in the question.  Comparator: An entity which is a target of comparison in a comparative question. According to these definitions, Q1 and Q2 below are not comparative questions while Q3 is. “iPod Touch” and “Zune HD” are comparators. 
Q1: “Which one is better?” Q2: “Is Lumix GH-1 the best camera?” Q3: “What‟s the difference between iPod Touch and Zune HD?” The goal of this work is mining comparators from comparative questions. The results would be very useful in helping users‟ exploration of 650 alternative choices by suggesting comparable entities based on other users‟ prior requests. To mine comparators from comparative questions, we first have to detect whether a question is comparative or not. According to our definition, a comparative question has to be a question with intent to compare at least two entities. Please note that a question containing at least two entities is not a comparative question if it does not have comparison intent. However, we observe that a question is very likely to be a comparative question if it contains at least two entities. We leverage this insight and develop a weakly supervised bootstrapping method to identify comparative questions and extract comparators simultaneously. To our best knowledge, this is the first attempt to specially address the problem on finding good comparators to support users‟ comparison activity. We are also the first to propose using comparative questions posted online that reflect what users truly care about as the medium from which we mine comparable entities. Our weakly supervised method achieves 82.5% F1-measure in comparative question identification, 83.3% in comparator extraction, and 76.8% in end-to-end comparative question identification and comparator extraction which outperform the most relevant state-of-the-art method by Jindal & Liu (2006b) significantly. The rest of this paper is organized as follows. The next section discusses previous works. Section 3 presents our weakly-supervised method for comparator mining. Section 4 reports the evaluations of our techniques, and we conclude the paper and discuss future work in Section 5. 2 Related Work 2.1 Overview In terms of discovering related items for an entity, our work is similar to the research on recommender systems, which recommend items to a user. Recommender systems mainly rely on similarities between items and/or their statistical correlations in user log data (Linden et al., 2003). For example, Amazon recommends products to its customers based on their own purchase histories, similar customers‟ purchase histories, and similarity between products. However, recommending an item is not equivalent to finding a comparable item. In the case of Amazon, the purpose of recommendation is to entice their customers to add more items to their shopping carts by suggesting similar or related items. While in the case of comparison, we would like to help users explore alternatives, i.e. helping them make a decision among comparable items. For example, it is reasonable to recommend “iPod speaker” or “iPod batteries” if a user is interested in “iPod”, but we would not compare them with “iPod”. However, items that are comparable with “iPod” such as “iPhone” or “PSP” which were found in comparative questions posted by users are difficult to be predicted simply based on item similarity between them. Although they are all music players, “iPhone” is mainly a mobile phone, and “PSP” is mainly a portable game device. They are similar but also different therefore beg comparison with each other. It is clear that comparator mining and item recommendation are related but not the same. 
Our work on comparator mining is related to the research on entity and relation extraction in information extraction (Cardie, 1997; Califf and Mooney, 1999; Soderland, 1999; Radev et al., 2002; Carreras et al., 2003). Specifically, the most relevant work is by Jindal and Liu (2006a and 2006b) on mining comparative sentences and relations. Their methods applied class sequential rules (CSR) (Chapter 2, Liu 2006) and label sequential rules (LSR) (Chapter 2, Liu 2006) learned from annotated corpora to identify comparative sentences and extract comparative relations respectively in the news and review domains. The same techniques can be applied to comparative question identification and comparator mining from questions. However, their methods typically can achieve high precision but suffer from low recall (Jindal and Liu, 2006b) (J&L). However, ensuring high recall is crucial in our intended application scenario where users can issue arbitrary queries. To address this problem, we develop a weakly-supervised bootstrapping pattern learning method by effectively leveraging unlabeled questions. Bootstrapping methods have been shown to be very effective in previous information extraction research (Riloff, 1996; Riloff and Jones, 1999; Ravichandran and Hovy, 2002; Mooney and Bunescu, 2005; Kozareva et al., 2008). Our work is similar to them in terms of methodology using bootstrapping technique to extract entities with a specific relation. However, our task is different from theirs in that it requires not only extracting entities (comparator extraction) but also ensuring that the entities are extracted from comparative questions (comparative question identification), which is generally not required in IE task. 651 2.2 Jindal & Liu 2006 In this subsection, we provide a brief summary of the comparative mining method proposed by Jindal and Liu (2006a and 2006b), which is used as baseline for comparison and represents the state-of-the-art in this area. We first introduce the definition of CSR and LSR rule used in their approach, and then describe their comparative mining method. Readers should refer to J&L‟s original papers for more details. CSR and LSR CSR is a classification rule. It maps a sequence pattern S(𝑠1𝑠2 … 𝑠𝑛) to a class C. In our problem, C is either comparative or non-comparative. Given a collection of sequences with class information, every CSR is associated to two parameters: support and confidence. Support is the proportion of sequences in the collection containing S as a subsequence. Confidence is the proportion of sequences labeled as C in the sequences containing the S. These parameters are important to evaluate whether a CSR is reliable or not. LSR is a labeling rule. It maps an input sequence pattern 𝑆(𝑠1𝑠2 … 𝑠𝑖… 𝑠𝑛) to a labeled sequence 𝑆′(𝑠1𝑠2 … 𝑙𝑖… 𝑠𝑛) by replacing one token (𝑠𝑖) in the input sequence with a designated label (𝑙𝑖). This token is referred as the anchor. The anchor in the input sequence could be extracted if its corresponding label in the labeled sequence is what we want (in our case, a comparator). LSRs are also mined from an annotated corpus, therefore each LSR also have two parameters: support and confidence. They are similarly defined as in CSR. Supervised Comparative Mining Method J&L treated comparative sentence identification as a classification problem and comparative relation extraction as an information extraction problem. 
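For concreteness, the support and confidence of such sequential rules, as defined above, can be computed roughly as in the sketch below. This is our illustration of the definitions rather than J&L's implementation; the data layout (token/POS sequences paired with a class label) and all names are assumptions.

```python
def is_subsequence(pattern, seq):
    """True if the items of `pattern` occur in `seq` in order (gaps allowed)."""
    it = iter(seq)
    return all(item in it for item in pattern)

def csr_stats(pattern, labeled_seqs, target_class):
    """Support/confidence of a class sequential rule `pattern -> target_class`.

    labeled_seqs: list of (sequence, class_label) pairs, e.g.
                  (['which', 'NN', 'is', 'JJR'], 'comparative').
    The mixed word/POS representation is assumed for illustration only.
    """
    covered = [label for seq, label in labeled_seqs if is_subsequence(pattern, seq)]
    support = len(covered) / len(labeled_seqs)
    confidence = (covered.count(target_class) / len(covered)) if covered else 0.0
    return support, confidence
```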
They first manually created a set of 83 keywords such as beat, exceed, and outperform that are likely indicators of comparative sentences. These keywords were then used as pivots to create part-of-speech (POS) sequence data. A manually annotated corpus with class information, i.e. comparative or non-comparative, was used to create sequences and CSRs were mined. A Naïve Bayes classifier was trained using the CSRs as features. The classifier was then used to identify comparative sentences. Given a set of comparative sentences, J&L manually annotated two comparators with labels $ES1 and $ES2 and the feature compared with label $FT for each sentence. J&L‟s method was only applied to noun and pronoun. To differentiate noun and pronoun that are not comparators or features, they added the fourth label $NEF, i.e. non-entity-feature. These labels were used as pivots together with special tokens li & rj 1 (token position), #start (beginning of a sentence), and #end (end of a sentence) to generate sequence data, sequences with single label only and minimum support greater than 1% are retained, and then LSRs were created. When applying the learned LSRs for extraction, LSRs with higher confidence were applied first. J&L‟s method have been proved effective in their experimental setups. However, it has the following weaknesses:  The performance of J&L‟s method relies heavily on a set of comparative sentence indicative keywords. These keywords were manually created and they offered no guidelines to select keywords for inclusion. It is also difficult to ensure the completeness of the keyword list.  Users can express comparative sentences or questions in many different ways. To have high recall, a large annotated training corpus is necessary. This is an expensive process.  Example CSRs and LSRs given in Jindal & Liu (2006b) are mostly a combination of POS tags and keywords. It is a surprise that their rules achieved high precision but low recall. They attributed most errors to POS tagging errors. However, we suspect that their rules might be too specific and overfit their small training set (about 2,600 sentences). We would like to increase recall, avoid overfitting, and allow rules to include discriminative lexical tokens to retain precision. In the next section, we introduce our method to address these shortcomings. 3 Weakly Supervised Method for Comparator Mining Our weakly supervised method is a pattern-based approach similar to J&L‟s method, but it is different in many aspects: Instead of using separate CSRs and LSRs, our method aims to learn se 1 li marks a token is at the ith position to the left of the pivot and rj marks a token is at jth position to the right of the pivot where i and j are between 1 and 4 in J&L (2006b). 652 quential patterns which can be used to identify comparative question and extract comparators simultaneously. In our approach, a sequential pattern is defined as a sequence S(s1s2 … si … sn) where si can be a word, a POS tag, or a symbol denoting either a comparator ($C), or the beginning (#start) or the end of a question (#end). A sequential pattern is called an indicative extraction pattern (IEP) if it can be used to identify comparative questions and extract comparators in them with high reliability. We will formally define the reliability score of a pattern in the next section. Once a question matches an IEP, it is classified as a comparative question and the token sequences corresponding to the comparator slots in the IEP are extracted as comparators. 
When a question can match multiple IEPs, the longest IEP is used 2 . Therefore, instead of manually creating a list of indicative keywords, we create a set of IEPs. We will show how to acquire IEPs automatically using a bootstrapping procedure with minimum supervision by taking advantage of a large unlabeled question collection in the following subsections. The evaluations shown in section 4 confirm that our weakly supervised method can achieve high recall while retain high precision. This pattern definition is inspired by the work of Ravichandran and Hovy (2002). Table 1 shows some examples of such sequential patterns. We also allow POS constraint on comparators as shown in the pattern “<, $C/NN or $C/NN ? #end>”. It means that a valid comparator must have a NN POS tag. 3.1 Mining Indicative Extraction Patterns Our weakly supervised IEP mining approach is based on two key assumptions: 2 It is because the longest IEP is likely to be the most specific and relevant pattern for the given question. Figure 1: Overview of the bootstrapping alogorithm  If a sequential pattern can be used to extract many reliable comparator pairs, it is very likely to be an IEP.  If a comparator pair can be extracted by an IEP, the pair is reliable. Based on these two assumptions, we design our bootstrapping algorithm as shown in Figure 1. The bootstrapping process starts with a single IEP. From it, we extract a set of initial seed comparator pairs. For each comparator pair, all questions containing the pair are retrieved from a question collection and regarded as comparative questions. From the comparative questions and comparator pairs, all possible sequential patterns are generated and evaluated by measuring their reliability score defined later in the Pattern Evaluation section. Patterns evaluated as reliable ones are IEPs and are added into an IEP repository. Then, new comparator pairs are extracted from the question collection using the latest IEPs. The new comparators are added to a reliable comparator repository and used as new seeds for pattern learning in the next iteration. All questions from which reliable comparators are extracted are removed from the collection to allow finding new patterns efficiently in later iterations. The process iterates until no more new patterns can be found from the question collection. There are two key steps in our method: (1) pattern generation and (2) pattern evaluation. In the following subsections, we will explain them in details. Pattern Generation To generate sequential patterns, we adapt the surface text pattern mining method introduced in (Ravichandran and Hovy, 2002). For any given comparative question and its comparator pairs, comparators in the question are replaced with symbol $Cs. Two symbols, #start and #end, are attached to the beginning and the end of a senSequential Patterns <#start which city is better, $C or $C ? #end> <, $C or $C ? #end> <#start $C/NN or $C/NN ? #end> <which NN is better, $C or $C ?> <which city is JJR, $C or $C ?> <which NN is JJR, $C or $C ?> ... Table 1: Candidate indicative extraction pattern (IEP) examples of the question “which city is better, NYC or Paris?” 653 tence in the question. Then, the following three kinds of sequential patterns are generated from sequences of questions:  Lexical patterns: Lexical patterns indicate sequential patterns consisting of only words and symbols ($C, #start, and #end). 
They are generated by a suffix tree algorithm (Gusfield, 1997) with two constraints: a pattern should contain more than one $C, and its frequency in the collection should be higher than an empirically determined number $\beta$.

• Generalized patterns: A lexical pattern can be too specific. Thus, we generalize lexical patterns by replacing one or more words with their POS tags. From a lexical pattern containing $N$ words (excluding $C symbols), $2^N - 1$ generalized patterns can be produced.

• Specialized patterns: In some cases, a pattern can be too general. For example, although the question "ipod or zune?" is comparative, the pattern "<$C or $C>" is too general, and many non-comparative questions match it, for instance, "true or false?". For this reason, we perform pattern specialization by adding POS tags to all comparator slots. For example, from the lexical pattern "<$C or $C>" and the question "ipod or zune?", "<$C/NN or $C/NN?>" is produced as a specialized pattern.

Note that generalized patterns are generated from lexical patterns, and specialized patterns are generated from the combined set of generalized and lexical patterns. The final set of candidate patterns is a mixture of lexical, generalized, and specialized patterns.

Pattern Evaluation

According to our first assumption, a reliability score $R_k(p_i)$ for a candidate pattern $p_i$ at iteration $k$ can be defined as follows:

$R_k(p_i) = \frac{\sum_{\forall cp_j \in CP_{k-1}} N_Q(p_i \rightarrow cp_j)}{N_Q(p_i \rightarrow *)}$   (1)

where $p_i$ can extract known reliable comparator pairs $cp_j$. $CP_{k-1}$ denotes the reliable comparator pair repository accumulated up to the $(k-1)$-th iteration, and $N_Q(x)$ is the number of questions satisfying a condition $x$. The condition $p_i \rightarrow cp_j$ denotes that $cp_j$ can be extracted from a question by applying pattern $p_i$, while the condition $p_i \rightarrow *$ denotes any question containing pattern $p_i$.

However, Equation (1) can suffer from incomplete knowledge about reliable comparator pairs. For example, very few reliable pairs are generally discovered in the early stages of bootstrapping. In this case, the value of Equation (1) may be underestimated, which reduces its effectiveness in distinguishing IEPs from non-reliable patterns. We mitigate this problem with a lookahead procedure. Let $P_k$ denote the set of candidate patterns at iteration $k$. We define the support $S$ of a comparator pair $cp_i$ which can be extracted by $P_k$ but does not yet exist in the current reliable set as

$S(cp_i) = N_Q(P_k \rightarrow cp_i)$   (2)

where $P_k \rightarrow cp_i$ means that at least one of the patterns in $P_k$ can extract $cp_i$ from a question. Intuitively, if $cp_i$ can be extracted by many candidate patterns in $P_k$, it is likely to be extracted as a reliable pair in the next iteration. Based on this intuition, a pair $cp_i$ whose support $S$ exceeds a threshold $\alpha$ is regarded as a likely-reliable pair. Using likely-reliable pairs, the lookahead reliability score $\hat{R}_k(p_i)$ is defined as

$\hat{R}_k(p_i) = \frac{\sum_{\forall cp_i \in CP^k_{rel}} N_Q(p_i \rightarrow cp_i)}{N_Q(p_i \rightarrow *)}$   (3)

where $CP^k_{rel}$ denotes the set of likely-reliable pairs based on $P_k$. By interpolating Equations (1) and (3), the final reliability score $R^k_{final}(p_i)$ for a pattern is defined as follows:

$R^k_{final}(p_i) = \lambda \cdot R_k(p_i) + (1-\lambda) \cdot \hat{R}_k(p_i)$   (4)

Using Equation (4), we evaluate all candidate patterns and select those whose score exceeds a threshold $\gamma$ as IEPs. All necessary parameter values are determined empirically. We will explain how to determine our parameters in Section 4.
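To make the evaluation step concrete, the sketch below implements Equations (1)-(4) over a question collection. It is an illustration only, not the authors' code: the helper extract(p, q), which returns the comparator pair a pattern extracts from a question (or None if it does not match), is an assumption, and the default thresholds simply follow the values reported in Section 4.

```python
from collections import defaultdict

def evaluate_patterns(questions, candidate_patterns, reliable_pairs, extract,
                      alpha=3, lam=0.5, gamma=0.8):
    """Score candidate patterns with Equations (1)-(4) and return the IEPs.

    questions          -- the (unlabeled) question collection
    candidate_patterns -- lexical/generalized/specialized candidate patterns
    reliable_pairs     -- comparator pairs accumulated so far (CP_{k-1})
    extract(p, q)      -- comparator pair that pattern p extracts from question q,
                          or None if p does not match q (assumed helper)
    """
    n_total = defaultdict(int)                           # N_Q(p -> *)
    n_known = defaultdict(int)                           # sum over cp in CP_{k-1} of N_Q(p -> cp)
    extractions = defaultdict(lambda: defaultdict(int))  # N_Q(p -> cp)
    pair_support = defaultdict(int)                      # S(cp), Eq. (2)

    for q in questions:
        new_pairs_in_q = set()
        for p in candidate_patterns:
            cp = extract(p, q)
            if cp is None:
                continue
            n_total[p] += 1
            extractions[p][cp] += 1
            if cp in reliable_pairs:
                n_known[p] += 1
            else:
                new_pairs_in_q.add(cp)
        for cp in new_pairs_in_q:                        # count questions, not matches
            pair_support[cp] += 1

    # pairs extracted by the candidate patterns in more than alpha questions
    likely_reliable = {cp for cp, s in pair_support.items() if s > alpha}

    ieps = []
    for p in candidate_patterns:
        if n_total[p] == 0:
            continue
        r = n_known[p] / n_total[p]                                   # Eq. (1)
        r_hat = sum(c for cp, c in extractions[p].items()
                    if cp in likely_reliable) / n_total[p]            # Eq. (3)
        if lam * r + (1 - lam) * r_hat > gamma:                       # Eq. (4)
            ieps.append(p)
    return ieps
```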
4 Experiments 4.1 Experiment Setup Source Data All experiments were conducted on about 60M questions mined from Yahoo! Answers‟ question title field. The reason that we used only a title 654 field is that they clearly express a main intention of an asker with a form of simple questions in general. Evaluation Data Two separate data sets were created for evaluation. First, we collected 5,200 questions by sampling 200 questions from each Yahoo! Answers category3. Two annotators were asked to label each question manually as comparative, noncomparative, or unknown. Among them, 139 (2.67%) questions were classified as comparative, 4,934 (94.88%) as non-comparative, and 127 (2.44%) as unknown questions which are difficult to assess. We call this set SET-A. Because there are only 139 comparative questions in SET-A, we created another set which contains more comparative questions. We manually constructed a keyword set consisting of 53 words such as “or” and “prefer”, which are good indicators of comparative questions. In SET-A, 97.4% of comparative questions contains one or more keywords from the keyword set. We then randomly selected another 100 questions from each Yahoo! Answers category with one extra condition that all questions have to contain at least one keyword. These questions were labeled in the same way as SET-A except that their comparators were also annotated. This second set of questions is referred as SET-B. It contains 853 comparative questions and 1,747 noncomparative questions. For comparative question identification experiments, we used all labeled questions in SET-A and SET-B. For comparator extraction experiments, we used only SET-B. All the remaining unlabeled questions (called as SET-R) were used for training our weakly supervised method. As a baseline method, we carefully implemented J&L‟s method. Specifically, CSRs for comparative question identification were learned from the labeled questions, and then a statistical classifier was built by using CSR rules as features. We examined both SVM and Naïve Bayes (NB) models as reported in their experiments. For the comparator extraction, LSRs were learned from SET-B and applied for comparator extraction. To start the bootstrapping procedure, we applied the IEP “<#start nn/$c vs/cc nn/$c ?/. #end>” to all the questions in SET-R and gathered 12,194 comparator pairs as the initial seeds. For our weakly supervised method, there 3 There are 26 top level categories in Yahoo! Answers. are four parameters, i.e. α, β, γ, and λ, need to be determined empirically. We first mined all possible candidate patterns from the suffix tree using the initial seeds. From these candidate patterns, we applied them to SET-R and got a new set of 59,410 candidate comparator pairs. Among these new candidate comparator pairs, we randomly selected 100 comparator pairs and manually classified them into reliable or non-reliable comparators. Then we found 𝛼 that maximized precision without hurting recall by investigating frequencies of pairs in the labeled set. By this method, 𝛼 was set to 3 in our experiments. Similarly, the threshold parameters 𝛽 and 𝛾 for pattern evaluation were set to 10 and 0.8 respectively. For the interpolation parameter 𝜆 in Equation (3), we simply set the value to 0.5 by assuming that two reliability scores are equally important. As evaluation measures for comparative question identification and comparator extraction, we used precision, recall, and F1-measure. All results were obtained from 5-fold cross validation. 
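Returning to the seeding step described above: applied to POS-tagged question titles, the single seed IEP can be pictured as a simple matcher like the one below. This is only a sketch, assuming questions are available as (token, POS) pairs; the tag strings follow the pattern notation in the text rather than the in-house tagset, and all names are illustrative.

```python
def match_seed_iep(tagged_question):
    """Apply the seed IEP <#start nn/$c vs/cc nn/$c ?/. #end> to one question.

    tagged_question: list of (token, POS) pairs for a question title,
    e.g. [('ipod', 'NN'), ('vs', 'CC'), ('zune', 'NN'), ('?', '.')].
    Returns the comparator pair or None.
    """
    if (len(tagged_question) == 4
            and tagged_question[0][1].lower() == 'nn'
            and tagged_question[1][0].lower() == 'vs'
            and tagged_question[1][1].lower() == 'cc'
            and tagged_question[2][1].lower() == 'nn'
            and tagged_question[3][0] == '?'):
        return (tagged_question[0][0], tagged_question[2][0])
    return None

def collect_seed_pairs(tagged_questions):
    """Gather the initial seed comparator pairs from the unlabeled questions (SET-R)."""
    pairs = set()
    for q in tagged_questions:
        cp = match_seed_iep(q)
        if cp:
            pairs.add(cp)
    return pairs
```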
Note that J&L‟s method needs a training data but ours use the unlabeled data (SET-R) with weakly supervised method to find parameter setting. This 5-fold evaluation data is not in the unlabeled data. Both methods were tested on the same test split in the 5-fold cross validation. All evaluation scores are averaged across all 5 folds. For question processing, we used our own statistical POS tagger developed in-house4. 4.2 Experiment Results Comparative Question Identification and Comparator Extraction Table 2 shows our experimental results. In the table, “Identification only” indicates the performances in comparative question identification, “Extraction only” denotes the performances of comparator extraction when only comparative questions are used as input, and “All” indicates the end-to-end performances when question identification results were used in comparator extraction. Note that the results of J&L‟s method on our collections are very comparable to what is reported in their paper. In terms of precision, the J&L‟s method is competitive to our method in comparative ques 4 We used NLC-PosTagger which is developed by NLC group of Microsoft Research Asia. It uses the modified Penn Treebank POS set for its output; for example, NNS (plural nouns), NN (nouns), NP (noun phrases), NPS (plural noun phrases), VBZ (verb, present tense, 3rd person singular), JJ (adjective), RB(adverb), and so on. 655 tion identification. However, the recall is significantly lower than ours. In terms of recall, our method outperforms J&L‟s method by 35% and 22% in comparative question identification and comparator extraction respectively. In our analysis, the low recall of J&L‟s method is mainly caused by low coverage of learned CSR patterns over the test set. In the end-to-end experiments, our weakly supervised method performs significantly better than J&L‟s method. Our method is about 55% better in F1-measure. This result also highlights another advantage of our method that identifies comparative questions and extracts comparators simultaneously using one single pattern. J&L‟s method uses two kinds of pattern rules, i.e. CSRs and LSRs. Its performance drops significantly due to error propagations. F1-measure of J&L‟s method in “All” is about 30% and 32% worse than the scores of “Identification only” and “Extraction” only respectively, our method only shows small amount of performance decrease (approximately 7-8%). We also analyzed the effect of pattern generalization and specialization. Table 3 shows the results. Despite of the simplicity of our methods, they significantly contribute to performance improvements. This result shows the importance of learning patterns flexibly to capture various comparative question expressions. Among the 6,127 learned IEPs in our database, 5,930 patterns are generalized ones, 171 are specialized ones, and only 26 patterns are non-generalized and specialized ones. To investigate the robustness of our bootstrapping algorithm for different seed configurations, we compare the performances between two different seed IEPs. The results are shown in Table 4. As shown in the table, the performance of our bootstrapping algorithm is stable regardless of significantly different number of seed pairs generated by the two IEPs. This result implies that our bootstrapping algorithm is not sensitive to the choice of IEP. Table 5 also shows the robustness of our bootstrapping algorithm. 
In Table 5, „All’ indicates the performances that all comparator pairs from a single seed IEP is used for the bootstrapping, and „Partial‟ indicate the performances using only 1,000 randomly sampled pairs from „All’. As shown in the table, there is no significant performance difference. In addition, we conducted error analysis for the cases where our method fails to extract correct comparator pairs:  23.75% of errors on comparator extraction are due to wrong pattern selection by our simple maximum IEP length strategy.  The remaining 67.63% of errors come from comparative questions which cannot be covered by the learned IEPs. Recall Precision F-score Original Patterns 0.689 0. 449 0.544 + Specialized 0.731 0.602 0.665 + Generalized 0.760 0.776 0.768 Table 3: Effect of pattern specialization and Generalization in the end-to-end experiments. Seed patterns # of resulted seed pairs F-score <#start nn/$c vs/cc nn/$c ?/. #end> 12,194 0.768 <#start which/wdt is/vb better/jjr , nn/$c or/cc nn/$c ?/. #end> 1,478 0.760 Table 4: Performance variation over different initial seed IEPs in the end-to-end experiments Set (# of seed pairs) Recall Precision F-score All (12,194) 0.760 0.774 0.768 Partial (1,000) 0.724 0.763 0.743 Table 5: Performance variation over different sizes of seed pairs generated from a single initial seed IEP “<#start nn/$c vs/cc nn/$c ?/. #end>”. Identification only (SET-A+SET-B) Extraction only (SET-B) All (SET-B) J&L (CSR) Our Method J&L (LSR) Our Method J&L Our Method SVM NB SVM NB Recall 0.601 0.537 0.817* 0.621 0.760* 0.373 0.363 0.760* Precision 0.847 0.851 0.833 0.861 0.916* 0.729 0.703 0.776* F-score 0.704 0.659 0.825* 0.722 0.833* 0.493 0.479 0.768* Table 2: Performance comparison between our method and Jindal and Bing‟s Method (denoted as J&L). The values with * indicate statistically significant improvements over J&L (CSR) SVM or J&L (LSR) according to t-test at p < 0.01 level. 656 Examples of Comparator Extraction By applying our bootstrapping method to the entire source data (60M questions), 328,364 unique comparator pairs were extracted from 679,909 automatically identified comparative questions. Table 6 lists top 10 frequently compared entities for a target item, such as Chanel, Gap, in our question archive. As shown in the table, our comparator mining method successfully discovers realistic comparators. For example, for „Chanel’, most results are high-end fashion brands such as „Dior’ or „Louis Vuitton’, while the ranking results for „Gap’ usually contains similar apparel brands for young people, such as „Old Navy’ or „Banana Republic’. For the basketball player „Kobe‟, most of the top ranked comparators are also famous basketball players. Some interesting comparators are shown for „Canon‟ (the company name). It is famous for different kinds of its products, for example, digital cameras and printers, so it can be compared to different kinds of companies. For example, it is compared to „HP’, „Lexmark’, or „Xerox’, the printer manufacturers, and also compared to „Nikon’, „Sony’, or „Kodak’, the digital camera manufactures. Besides general entities such as a brand or company name, our method also found an interesting comparable entity for a specific item in the experiments. For example, our method recommends „Nikon d40i‟, „Canon rebel xti‟, „Canon rebel xt‟, „Nikon d3000‟, „Pentax k100d‟, „Canon eos 1000d‟ as comparators for the specific camera product „Nikon 40d‟. Table 7 can show the difference between our comparator mining and query/item recommendation. 
As shown in the table, „Google related searches‟ generally suggests a mixed set of two kinds of related queries for a target entity: (1) queries specified with subtopics for an original query (e.g., „Chanel handbag‟ for „Chanel‟) and (2) its comparable entities (e.g., „Dior‟ for „Chanel‟). It confirms one of our claims that comparator mining and query/item recommendation are related but not the same. 5 Conclusion In this paper, we present a novel weakly supervised method to identify comparative questions and extract comparator pairs simultaneously. We rely on the key insight that a good comparative question identification pattern should extract good comparators, and a good comparator pair should occur in good comparative questions to bootstrap the extraction and identification process. By leveraging large amount of unlabeled data and the bootstrapping process with slight supervision to determine four parameters, we found 328,364 unique comparator pairs and 6,869 extraction patterns without the need of creating a set of comparative question indicator keywords. The experimental results show that our method is effective in both comparative question identification and comparator extraction. It sig Chanel Gap iPod Kobe Canon 1 Dior Old Navy Zune Lebron Nikon 2 Louis Vuitton American Eagle mp3 player Jordan Sony 3 Coach Banana Republic PSP MJ Kodak 4 Gucci Guess by Marciano cell phone Shaq Panasonic 5 Prada ACP Ammunition iPhone Wade Casio 6 Lancome Old Navy brand Creative Zen T-mac Olympus 7 Versace Hollister Zen Lebron James Hp 8 LV Aeropostal iPod nano Nash Lexmark 9 Mac American Eagle outfitters iPod touch KG Pentax 10 Dooney Guess iRiver Bonds Xerox Table 6: Examples of comparators for different entities Chanel Gap iPod Kobe Canon Chanel handbag Gap coupons iPod nano Kobe Bryant stats Canon t2i Chanel sunglass Gap outlet iPod touch Lakers Kobe Canon printers Chanel earrings Gap card iPod best buy Kobe espn Canon printer drivers Chanel watches Gap careers iTunes Kobe Dallas Mavericks Canon downloads Chanel shoes Gap casting call Apple Kobe NBA Canon copiers Chanel jewelry Gap adventures iPod shuffle Kobe 2009 Canon scanner Chanel clothing Old navy iPod support Kobe san Antonio Canon lenses Dior Banana republic iPod classic Kobe Bryant 24 Nikon Table 7: Related queries returned by Google related searches for the same target entities in Table 6. The bold ones indicate overlapped queries to the comparators in Table 6. 657 nificantly improves recall in both tasks while maintains high precision. Our examples show that these comparator pairs reflect what users are really interested in comparing. Our comparator mining results can be used for a commerce search or product recommendation system. For example, automatic suggestion of comparable entities can assist users in their comparison activities before making their purchase decisions. Also, our results can provide useful information to companies which want to identify their competitors. In the future, we would like to improve extraction pattern application and mine rare extraction patterns. How to identify comparator aliases such as „LV’ and „Louis Vuitton‟ and how to separate ambiguous entities such “Paris vs. London” as location and “Paris vs. Nicole” as celebrity are all interesting research topics. We also plan to develop methods to summarize answers pooled by a given comparator pair. 6 Acknowledgement This work was done when the first author worked as an intern at Microsoft Research Asia. References Mary Elaine Califf and Raymond J. Mooney. 
1999. Relational learning of pattern-match rules for information extraction. In Proceedings of AAAI’99 /IAAI’99. Claire Cardie. 1997. Empirical methods in information extraction. AI magazine, 18:65–79. Dan Gusfield. 1997. Algorithms on strings, trees, and sequences: computer science and computational biology. Cambridge University Press, New York, NY, USA Taher H. Haveliwala. 2002. Topic-sensitive pagerank. In Proceedings of WWW ’02, pages 517–526. Glen Jeh and Jennifer Widom. 2003. Scaling personalized web search. In Proceedings of WWW ’03, pages 271–279. Nitin Jindal and Bing Liu. 2006a. Identifying comparative sentences in text documents. In Proceedings of SIGIR ’06, pages 244–251. Nitin Jindal and Bing Liu. 2006b. Mining comparative sentences and relations. In Proceedings of AAAI ’06. Zornitsa Kozareva, Ellen Riloff, and Eduard Hovy. 2008. Semantic class learning from the web with hyponym pattern linkage graphs. In Proceedings of ACL-08: HLT, pages 1048–1056. Greg Linden, Brent Smith and Jeremy York. 2003. Amazon.com Recommendations: Item-to-Item Collaborative Filtering. IEEE Internet Computing, pages 76-80. Raymond J. Mooney and Razvan Bunescu. 2005. Mining knowledge from text using information extraction. ACM SIGKDD Exploration Newsletter, 7(1):3–10. Dragomir Radev, Weiguo Fan, Hong Qi, and Harris Wu and Amardeep Grewal. 2002. Probabilistic question answering on the web. Journal of the American Society for Information Science and Technology, pages 408–419. Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of ACL ’02, pages 41–47. Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of AAAI ’99 /IAAI ’99, pages 474–479. Ellen Riloff. 1996. Automatically generating extraction patterns from untagged text. In Proceedings of the 13th National Conference on Artificial Intelligence, pages 1044–1049. Stephen Soderland. 1999. Learning information extraction rules for semi-structured and free text. Machine Learning, 34(1-3):233–272. 658
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 659–670, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Towards robust multi-tool tagging. An OWL/DL-based approach Christian Chiarcos University of Potsdam, Germany [email protected] Abstract This paper describes a series of experiments to test the hypothesis that the parallel application of multiple NLP tools and the integration of their results improves the correctness and robustness of the resulting analysis. It is shown how annotations created by seven NLP tools are mapped onto toolindependent descriptions that are defined with reference to an ontology of linguistic annotations, and how a majority vote and ontological consistency constraints can be used to integrate multiple alternative analyses of the same token in a consistent way. For morphosyntactic (parts of speech) and morphological annotations of three German corpora, the resulting merged sets of ontological descriptions are evaluated in comparison to (ontological representation of) existing reference annotations. 1 Motivation and overview NLP systems for higher-level operations or complex annotations often integrate redundant modules that provide alternative analyses for the same linguistic phenomenon in order to benefit from their respective strengths and to compensate for their respective weaknesses, e.g., in parsing (Crysmann et al., 2002), or in machine translation (Carl et al., 2000). The current trend to parallel and distributed NLP architectures (Aschenbrenner et al., 2006; Gietz et al., 2006; Egner et al., 2007; Lu´ıs and de Matos, 2009) opens the possibility of exploring the potential of redundant parallel annotations also for lower levels of linguistic analysis. This paper evaluates the potential benefits of such an approach with respect to morphosyntax (parts of speech, pos) and morphology in German: In comparison to English, German shows a rich and polysemous morphology, and a considerable number of NLP tools are available, making it a promising candidate for such an experiment. Previous research indicates that the integration of multiple part of speech taggers leads to more accurate analyses. So far, however, this line of research focused on tools that were trained on the same corpus (Brill and Wu, 1998; Halteren et al., 2001), or that specialize to different subsets of the same tagset (Zavrel and Daelemans, 2000; Tufis¸, 2000; Borin, 2000). An even more substantial increase in accuracy and detail can be expected if tools are combined that make use of different annotation schemes. For this task, ontologies of linguistic annotations are employed to assess the linguistic information conveyed in a particular annotation and to integrate the resulting ontological descriptions in a consistent and tool-independent way. The merged set of ontological descriptions is then evaluated with reference to morphosyntactic and morphological annotations of three corpora of German newspaper articles, the NEGRA corpus (Skut et al., 1998), the TIGER corpus (Brants et al., 2002) and the Potsdam Commentary Corpus (Stede, 2004, PCC). 
2 Ontologies and annotations Various repositories of linguistic annotation terminology have been developed in the last decades, ranging from early texts on annotation standards (Bakker et al., 1993; Leech and Wilson, 1996) over relational data base models (Bickel and Nichols, 2000; Bickel and Nichols, 2002) to more recent formalizations in OWL/RDF (or with OWL/RDF export), e.g., the General Ontology of Linguistic Description (Farrar and Langendoen, 2003, GOLD), the ISO TC37/SC4 Data Category Registry (Ide and Romary, 2004; Kemps659 Snijders et al., 2009, DCR), the OntoTag ontology (Aguado de Cea et al., 2002), or the Typological Database System ontology (Saulwick et al., 2005, TDS). Despite their common level of representation, however, these efforts have not yet converged into a unified and generally accepted ontology of linguistic annotation terminology, but rather, different resources are maintained by different communities, so that a considerable amount of disagreement between them and their respective definitions can be observed.1 Such conceptual mismatches and incompatibilities between existing terminological repositories have been the motivation to develop the OLiA architecture (Chiarcos, 2008) that employs a shallow Reference Model to mediate between (ontological models of) annotation schemes and several existing terminology repositories, incl. GOLD, the DCR, and OntoTag. When an annotation receives a representation in the OLiA Reference Model, it is thus also interpretable with respect to other linguistic ontologies. Therefore, the findings for the OLiA Reference Model in the experiments described below entail similar results for an application of GOLD or the DCR to the same task. 2.1 The OLiA ontologies The Ontologies of Linguistic Annotations – briefly, OLiA ontologies (Chiarcos, 2008) – represent an architecture of modular OWL/DL ontologies that formalize several intermediate steps of the mapping between concrete annotations, a Reference Model and existing terminology repositories (‘External Reference Models’ in OLiA terminology) such as the DCR.2 The OLiA ontologies were originally developed as part of an infrastructure for the sustainable maintenance of linguistic resources (Schmidt et al., 2006) where they were originally applied 1As one example, a GOLD Numeral is a Determiner (Numeral ⊑ Quantifier ⊑ Determiner, http://linguistics-ontology.org/gold/2008/ Numeral), whereas a DCR Numeral is defined on the basis of its semantic function, without any references to syntactic categories (http://www.isocat.org/datcat/DC-1334). Thus, two in two of them is a DCR Numeral but not a GOLD Numeral. 2The OLiA Reference Model is accessible via http://nachhalt.sfb632.uni-potsdam.de/owl/ olia.owl. Several annotation models, e.g., stts.owl, tiger.owl, connexor.owl, morphisto.owl can be found in the same directory together with the corresponding linking files stts-link.rdf, tiger-link.rdf, connexor-link.rdf and morphisto-link.rdf. to the formal representation and documentation of annotation schemes, and for concept-based annotation queries over to multiple, heterogeneous corpora annotated with different annotation schemes (Rehm et al., 2007; Chiarcos et al., 2008). NLP applications of the OLiA ontologies include a proposal to integrate them with the OntoTag ontologies and to use them for interface specifications between modules in NLP pipeline architectures (Buyko et al., 2008). 
Further, Hellmann (2010) described the application of the OLiA ontologies within NLP2RDF, an OWL-based blackboard approach to assess the meaning of text from grammatical analyses and subsequent enrichment with ontological knowledge sources. OLiA distinguishes three different classes of ontologies: • The OLIA REFERENCE MODEL specifies the common terminology that different annotation schemes can refer to. It is primarily based on a blend of concepts of EAGLES and GOLD, and further extended in accordance with different annotation schemes, with the TDS ontology and with the DCR (Chiarcos, 2010). • Multiple OLIA ANNOTATION MODELs formalize annotation schemes and tag sets. Annotation Models are based on the original documentation and data samples, so that they provide an authentic representation of the annotation not biased with respect to any particular interpretation. • For every Annotation Model, a LINKING MODEL defines subClassOf (⊑) relationships between concepts/properties in the respective Annotation Model and the Reference Model. Linking Models are interpretations of Annotation Model concepts and properties in terms of the Reference Model, and thus multiple alternative Linking Models for the same Annotation Model are possible. Other Linking Models specify ⊑relationships between Reference Model concepts/properties and concepts/properties of an External Reference Model such as GOLD or the DCR. The OLiA Reference Model (namespace olia) specifies concepts that describe linguistic categories (e.g., olia:Determiner) and grammatical features (e.g., olia:Accusative), as well 660 Figure 1: Attributive demonstrative pronouns (PDAT) in the STTS Annotation Model Figure 2: Selected morphosyntactic categories in the OLiA Reference Model Figure 3: Individuals for accusative and singular in the TIGER Annotation Model Figure 4: Selected morphological features in the OLiA Reference Model as properties that define possible relations between those (e.g., olia:hasCase). More general concepts that represent organizational information rather than possible annotations (e.g., MorphosyntacticCategory and CaseFeature) are stored in a separate ontology (namespace olia top). The Reference Model is a shallow ontology: It does not specify disjointness conditions of concepts and cardinality or domain restrictions of properties. Instead, it assumes that such constraints are inherited by means of ⊑relationships from an External Reference Model. Different External Reference Models may take different positions on the issue – as languages do3 –, so that this aspect is left underspecified in the Reference Model. 3Based on primary experience with Western European languages, for example, one might assume that a hasGender property applies to nouns, adjectives, pronouns and determiners only. Yet, this is language-specific restriction: Russian finite verbs, for example, show gender congruency in past tense. Figs. 2 and 4 show excerpts of category and feature hierarchies in the Reference Model. With respect to morphosyntactic annotations (parts of speech, pos) and morphological annotations (morph), five Annotation Models for German are currently available: STTS (Schiller et al., 1999, pos), TIGER (Brants and Hansen, 2002, morph), Morphisto (Zielinski and Simon, 2008, pos, morph), RFTagger (Schmid and Laws, 2008, pos, morph), Connexor (Tapanainen and J¨arvinen, 1997, pos, morph). 
Further Annotation Models for pos and morph cover five different annotation schemes for English (Marcus et al., 1994; Sampson, 1995; Mandel, 2006; Kim et al., 2003, Connexor), two annotation schemes for Russian (Meyer, 2003; Sharoff et al., 2008), an annotation scheme designed for typological research and currently applied to approx. 30 different languages (Dipper et al., 2007), an annotation scheme for Old High German (Petrova et al., 2009), and an annotation scheme for Tibetan (Wagner and Zeisler, 2004). 661 Figure 5: The STTS tags PDAT and ART, their representation in the Annotation Model and linking with the Reference Model. Annotation Models differ from the Reference Model mostly in that they include not only concepts and properties, but also individuals: Annotation Model concepts reflect an abstract conceptual categorization, whereas individuals represent concrete values used to annotate the corresponding phenomenon. An individual is applicable to all annotations that match the string value specified by this individual’s hasTag, hasTagContaining, hasTagStartingWith, or hasTagEndingWith properties. Fig. 1 illustrates the structure of the STTS Annotation Model (namespace stts) for the individual stts:PDAT that represents the tag used for attributive demonstrative pronouns (demonstrative determiners). Fig. 3 illustrates the individuals tiger:accusative and tiger:singular from the hierarchy of morphological features in the TIGER Annotation Model (namespace tiger). Fig. 5 illustrates the linking between the STTS Annotation Model and the OLiA Reference Model for the individuals stts:PDAT and stts:ART. 2.2 Integrating different morphosyntactic and morphological analyses With the OLiA ontologies as described above, annotations from different annotation schemes can now be interpreted in terms of the OLiA Reference Model (or External Reference Models like GOLD or the DCR). As an example, consider the attributive demonstrative pronoun diese in (1). (1) Diese this nicht not neue new Erkenntnis insight konnte could der the Markt market der of.the M¨oglichkeiten possibilities am on.the Sonnabend Saturday in in Treuenbrietzen Treuenbrietzen bestens in.the.best.way unterstreichen underline . ‘The ‘Market of Possibilities’, held this Saturday in Treuenbrietzen, provided best evidence for this well-known (lit. ‘not new’) insight.’ (PCC, #4794) The phrase diese nicht neue Erkenntnis poses two challenges. First, it has to be recognized that the demonstrative pronoun is attributive, although it is separated from adjective and noun by nicht ‘not’. Second, the phrase is in accusative case, although the morphology is ambiguous between accusative and nominative, and nominative case would be expected for a sentence-initial NP. The Connexor analysis (Tapanainen and J¨arvinen, 1997) actually fails in both aspects (2). (2) PRON Dem FEM SG NOM (Connexor) The ontological analysis of this annotation begins by identifying the set of individuals from the Connexor Annotation Model that match it according to their hasTag (etc.) properties. The RDF triplet connexor:NOM connexor:hasTagContaining ‘NOM’4 indicates that the tag is an application of the individual connexor:NOM, an instance of connexor:Case. Further, the annotation matches connexor:PRON (an instance of connexor:Pronoun), etc. The result is a set of individuals that express different aspects of the meaning of the annotation. 
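For illustration, this matching step can be pictured as a simple lookup over (individual, matching property, string value) triples. The snippet below is a simplified stand-in for what is actually encoded as OWL axioms and resolved by a reasoner; it lists only the individuals needed for this example, and the Python representation is our assumption.

```python
# Each Annotation Model individual declares how it is anchored to tag strings
# (hasTag / hasTagContaining / hasTagStartingWith / hasTagEndingWith).
INDIVIDUALS = [
    ('connexor:PRON', 'hasTagContaining', 'PRON'),
    ('connexor:Dem',  'hasTagContaining', 'Dem'),
    ('connexor:FEM',  'hasTagContaining', 'FEM'),
    ('connexor:SG',   'hasTagContaining', 'SG'),
    ('connexor:NOM',  'hasTagContaining', 'NOM'),
]

MATCHERS = {
    'hasTag':             lambda tag, v: tag == v,
    'hasTagContaining':   lambda tag, v: v in tag,
    'hasTagStartingWith': lambda tag, v: tag.startswith(v),
    'hasTagEndingWith':   lambda tag, v: tag.endswith(v),
}

def matching_individuals(tag):
    """Return all Annotation Model individuals applicable to an annotation string."""
    return [ind for ind, prop, value in INDIVIDUALS
            if MATCHERS[prop](tag, value)]

# matching_individuals('PRON Dem FEM SG NOM')
# -> ['connexor:PRON', 'connexor:Dem', 'connexor:FEM', 'connexor:SG', 'connexor:NOM']
```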
For these individuals, the Annotation Model specifies superclasses (rdf:type) and other properties, i.e., connexor:NOM connexor:hasCase connexor:NOM, etc. The linguistic unit represented by the actual token can now be characterized by these properties: Every property applicable to a member in the individual set is assumed to be applicable to the linguistic unit as well. In order to save space, we use a notation closer to predicate logic (with the token as implicit subject). In terms of the Annotation Model, the token diese is thus described by the following descriptions: 4RDF triplets are quoted in simplified form, with XML namespaces replacing the actual URIs. 662 (3) rdf:type(connexor:Pronoun) connexor:hasCase(connexor:NOM) ... The Linking Model connexor-link.rdf provides us with the information that (i) connexor:Pronoun is a subclass of the Reference Model concept olia:Pronoun, (ii) connexor:NOM is an instance of the Reference Model concept olia:Nominative, and (iii) olia:hasCase is a subproperty of olia:hasCase. Accordingly, the predicates that describe the token diese can be reformulated in terms of the Reference Model. rdf:type(connexor:Pronoun) entails rdf:type(olia:Pronoun), etc. Similarly, we know that for some i:olia:Nominative it is true that olia:hasCase(i), abbreviated here as olia:hasCase(some olia:Nominative). In this way, the grammatical information conveyed in the original Connexor annotation can be represented in an annotation-independent and tagset-neutral way as shown for the Connexor analysis in (4). (4) rdf:type(olia:PronounOrDeterminer) rdf:type(olia:Pronoun) olia:hasNumber(some olia:Singular) olia:hasGender(some olia:Feminine) rdf:type(olia:DemonstrativePronoun) olia:hasCase(some olia:Nominative) Analogously, the corresponding RFTagger analysis (Schmid and Laws, 2008) given in (5) can be transformed into a description in terms of the OLiA Reference Model such as in (6). (5) PRO.Dem.Attr.-3.Acc.Sg.Fem (RFTagger) (6) rdf:type(olia:PronounOrDeterminer) olia:hasNumber(some olia:Singular) olia:hasGender(some olia:Feminine) olia:hasCase(some olia:Accusative) rdf:type(olia:DemonstrativeDeterminer) rdf:type(olia:Determiner) For every description obtained from these (and further) analyses, an integrated and consistent generalization can be established as described in the following section. 3 Processing linguistic annotations 3.1 Evaluation setup Fig. 6 sketches the architecture of the evaluation environment set up for this study.5 The input to the system is a set of documents with 5The code used for the evaluation setup is available under http://multiparse.sourceforge.net. Figure 6: Evaluation setup TIGER/NEGRA-style morphosyntactic or morphological annotation (Skut et al., 1998; Brants and Hansen, 2002) whose annotations are used as gold standard. From the annotated document, the plain tokenized text is extracted and analyzed by one or more of the following NLP tools: (i) Morphisto, a morphological analyzer without contextual disambiguation (Zielinski and Simon, 2008), (ii) two part of speech taggers: the TreeTagger (Schmid, 1994) and the Stanford Tagger (Toutanova et al., 2003), (iii) the RFTagger that performs part of speech and morphological analysis (Schmid and Laws, 2008), (iv) two PCFG parsers: the StanfordParser (Klein and Manning, 2003) and the BerkeleyParser (Petrov and Klein, 2007), and (v) the Connexor dependency parser (Tapanainen and J¨arvinen, 1997). These tools annotate parts of speech, and those in (i), (iii) and (v) also provide morphological features. 
All components ran in parallel threads on the same machine, with the exception of Morphisto, which was addressed as a web service. The set of matching Annotation Model individuals for every annotation and the respective set of Reference Model descriptions are determined by means of the Pellet reasoner (Sirin et al., 2007) as described above. A disambiguation routine (see below) then determines the maximal consistent set of ontological descriptions. Finally, the outcome of this process is compared to the set of descriptions corresponding to the original annotation in the corpus.

OLiA description              Σ     Morphisto   Connexor  RFTagger  TreeTagger  StanfordTagger  StanfordParser  BerkeleyParser
word class: type(...)
  PronounOrDeterminer         7     1 (4/4)*    1         1         1           1               1               1
  Determiner                  5.5   0.5**       0         1         1           1               1               1
  DemonstrativeDeterminer     5.5   0.5**       0         1         1           1               1               1
  Pronoun                     1.5   0.5**       1         0         0           0               0               0
  DemonstrativePronoun        1.5   0.5**       1         0         0           0               0               0
morphology: hasXY(...)                                              n/a         n/a             n/a             n/a
  hasNumber(some Singular)    2.5   0.5 (2/4)   1         1
  hasGender(some Feminine)    2.5   0.5 (2/4)   1         1
  hasCase(some Accusative)    1.5   0.5 (2/4)   0         1
  hasCase(some Nominative)    1.5   0.5 (2/4)   1         0
  hasNumber(some Plural)      0.5   0.5 (2/4)   0         0
* Morphisto produces four alternative candidate analyses for this example, so every alternative analysis receives the confidence score 0.25.
** Morphisto does not distinguish attributive and substitutive pronouns; it predicts type(Determiner ⊔ Pronoun).
Table 1: Confidence scores for diese in ex. (1)

3.2 Disambiguation

Returning to examples (4) and (6) above, we see that the resulting set of descriptions conveys properties that are obviously contradictory, e.g., hasCase(some Nominative) besides hasCase(some Accusative). Our approach to disambiguation combines ontological consistency criteria with a confidence ranking. As we simulate an uninformed approach, the confidence ranking follows a majority vote. For diese in (1), the consultation of all seven tools results in the confidence ranking shown in Tab. 1: if a tool supports a description with its analysis, the confidence score is increased by 1 (or by 1/n if the tool proposes n alternative annotations).

A maximal consistent set of descriptions is then established as follows:

(i) Given a confidence-ranked list of available descriptions S = (s1, ..., sn) and a result set T = ∅.
(ii) Let s1 be the first element of S = (s1, ..., sn).
(iii) If s1 is consistent with every description t ∈ T, then add s1 to T: T := T ∪ {s1}.
(iv) Remove s1 from S and iterate at (ii) until S is empty.

The consistency of ontological descriptions is defined here as follows:

• Two concepts A and B are consistent iff A ≡ B or A ⊑ B or B ⊑ A. Otherwise, A and B are disjoint.
• Two descriptions pred1(A) and pred2(B) are consistent iff A and B are consistent or pred1 is neither a subproperty nor a superproperty of pred2.

(The OLiA Reference Model does not specify disjointness constraints, and neither do GOLD or the DCR as External Reference Models. The axioms of the OntoTag ontologies, however, are specific to Spanish and cannot be directly applied to German.)

This heuristic formalizes an implicit disjointness assumption for all concepts in the ontology (all concepts are disjoint unless one is a subconcept of the other). Further, it imposes an implicit cardinality constraint on properties (e.g., hasCase(some Accusative) and hasCase(some Nominative) are inconsistent because Accusative and Nominative are sibling concepts and thus disjoint). For the example diese, the descriptions type(Pronoun) and type(DemonstrativePronoun) are inconsistent with type(Determiner), and hasNumber(some Plural) is inconsistent with hasNumber(some Singular) (Figs. 2 and 4); these descriptions are thus ruled out.
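Abstracting away from the OWL machinery, the greedy construction of the maximal consistent set and the consistency heuristic can be sketched as follows. The helpers subclass_of and subproperty_related stand in for queries against the ontology and are assumptions, as are all names; identical predicates are treated as "related" here, which matches the behaviour described for the type and hasCase descriptions.

```python
def consistent(d1, d2, subclass_of, subproperty_related):
    """Heuristic consistency of two descriptions, as defined above.

    d1, d2: (predicate, concept) pairs, e.g. ('hasCase', 'Accusative');
            plain class assignments use the pseudo-predicate 'type'.
    subclass_of(a, b): True if concept a is subsumed by b (or a == b).
    subproperty_related(p, q): True if p is a sub- or superproperty of q (or p == q).
    """
    (p1, a), (p2, b) = d1, d2
    if subclass_of(a, b) or subclass_of(b, a):   # concepts are consistent
        return True
    # disjoint concepts only clash if the predicates are related
    return not subproperty_related(p1, p2)

def max_consistent_set(ranked_descriptions, subclass_of, subproperty_related):
    """Greedily build a maximal consistent description set from a confidence ranking."""
    selected = []
    for d in ranked_descriptions:                # highest confidence first
        if all(consistent(d, t, subclass_of, subproperty_related) for t in selected):
            selected.append(d)
    return selected
```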
The hasCase descriptions have identical confidence scores, so that the first hasCase description that the algorithm encounters is chosen for the set of resulting descriptions, the other one is ruled out because of their inconsistency. 6The OLiA Reference Model does not specify disjointness constraints, and neither do GOLD or the DCR as External Reference Models. The axioms of the OntoTag ontologies, however, are specific to Spanish and cannot be directly applied to German. 664 PCC TIGER NEGRA best-performing tool (StanfordTagger) .960 .956 .990∗ average (and std. deviation) for tool combinations 1 tool .868 (.109) .864 (.122) .870 (.113) 2 tools .928 (.018) .931 (.021) .943 (.028) 3 tools .947 (.014) .948 (.013) .956 (.018) 4 tools .956 (.006) .955 (.009) .963 (.013) 5 tools .959 (.006) .960 (.007) .964 (.009) 6 tools .963 (.003) .963 (.007) .965 (.007) all tools .967 .960 .965 ∗The Stanford Tagger was trained on the NEGRA corpus. Table 2: Recall for rdf:type descriptions for word classes TIGER NEGRA 1 tool .678 (.106) .660 (.091) Morphisto .573 .568 Connexor .674 .662 RFTagger .786 .751 2 tools .761 (.019) .740 (.012) C+M .738 .730 M+R .769 .737 C+R .773 .753 all tools .791 .770 Table 3: Recall for morphological hasXY() descriptions The resulting, maximal consistent set of descriptions is then compared with the ontological descriptions that correspond to the original annotation in the corpus. 4 Evaluation Six experiments were conducted with the goal to evaluate the prediction of word classes and morphological features on parts of three corpora of German newspaper articles: NEGRA (Skut et al., 1998), TIGER (Brants et al., 2002), and the Potsdam Commentary Corpus (Stede, 2004, PCC). From every corpus 10,000 tokens were considered for the analysis. TIGER and NEGRA are well-known resources that also influenced the design of several of the tools considered. For this reason, the PCC was consulted, a small collection of newspaper commentaries, 30,000 tokens in total, annotated with TIGER-style parts of speech and syntax (by members of the TIGER project). None of the tools considered here were trained on this data, so that it provides independent test data. The ontological descriptions were evaluated for recall:7 (7) recall(T) = P n i=1 |Dpredicted(ti)∩Dtarget(ti)| P n i=1 |Dtarget(ti)| In (7), T is a text (a list of tokens) with T = (t1, ..., tn), Dpredicted(t) are descriptions retrieved from the NLP analyses of the token t, and Dtarget(t) is the set of descriptions that correspond to the original annotation of t in the corpus. 7Precision and accuracy may not be appropriate measurements in this case: Annotation schemes differ in their expressiveness, so that a description predicted by an NLP tool but not found in the reference annotation may nevertheless be correct. The RFTagger, for example, assigns demonstrative pronouns the feature ‘3rd person’, that is not found in TIGER/NEGRA-style annotation because of its redundancy. 4.1 Word classes Table 2 shows that the recall of rdf:type descriptions (for word classes) increases continuously with the number of NLP tools applied. The combination of all seven tools actually shows a better recall than the best-performing single NLP tool. (The NEGRA corpus is an apparent exception only; the exceptionally high recall of the Stanford Tagger reflects the fact that it was trained on NEGRA.) A particularly high increase in recall occurs when tools are combined that compensate for their respective deficits. 
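The recall measure in (7) amounts to counting, token by token, how many of the gold-standard ontological descriptions are recovered among the predicted ones. A minimal sketch, with toy description sets standing in for real corpus data:

```python
# A direct reading of the recall measure in (7): for each token, count how
# many of the target (gold-standard) descriptions also occur among the
# descriptions predicted from the NLP analyses, summed over the whole text.

def description_recall(predicted, target):
    """predicted, target: lists (one entry per token) of sets of descriptions."""
    hits = sum(len(p & t) for p, t in zip(predicted, target))
    total = sum(len(t) for t in target)
    return hits / total if total else 0.0

# Toy example with two tokens (illustrative descriptions, not corpus data)
predicted = [{"type:Pronoun", "hasCase:some Nominative"},
             {"type:Determiner"}]
target = [{"type:Pronoun", "type:DemonstrativePronoun", "hasCase:some Nominative"},
          {"type:Determiner"}]
print(description_recall(predicted, target))  # 3 / 4 = 0.75
```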
Morphisto, for example, generates alternative morphological analyses, so that the disambiguation algorithm performs a random choice between these. Morphisto has thus the worst recall among all tools considered (PCC .69, TIGER .65, NEGRA .70 for word classes). As compared to this, Connexor performs a contextual disambiguation; its recall is, however, limited by its coarse-grained word classes (PCC .73, TIGER .72, NEGRA .73). The combination of both tools yields a more detailed and context-sensitive analysis and thus results in a boost in recall by more than 13% (PCC .87, TIGER .86, NEGRA .86). 4.2 Morphological features For morphological features, Tab. 3 shows the same tendencies that were also observed for word classes: The more tools are combined, the greater the recall of the generated descriptions, and the recall of combined tools often outperforms the recall of individual tools. The three tools that provide morphological annotations (Morphisto, Connexor, RFTagger) were evaluated against 10,000 tokens from TIGER and NEGRA respectively. The best-performing tool was the RFTagger, which possibly reflects the fact 665 that it was trained on TIGER-style annotations, whereas Morphisto and Connexor were developed on the basis of independent resources and thus differ from the reference annotation in their respective degree of granularity. 5 Summary and Discussion With the ontology-based approach described in this paper, the performance of annotation tools can be evaluated on a conceptual basis rather than by means of a string comparison with target annotations. A formal model of linguistic concepts is extensible, finer-grained and, thus, potentially more adequate for the integration of linguistic annotations than string-based representations, especially for heterogeneous annotations, if the tagsets involved are structured according to different design principles (e.g., due to different terminological traditions, different communities involved, etc.). It has been shown that by abstracting from tool-specific representations of linguistic annotations, annotations from different tagsets can be represented with reference to the OLiA ontologies (and/or with other OWL/RDF-based terminology repositories linked as External Reference Models). In particular, it is possible to compare an existing reference annotation with annotations produced by NLP tools that use independently developed and differently structured annotation schemes (such as Connexor vs. RFTagger vs. Morphisto). Further, an algorithm for the integration of different annotations has been proposed that makes use of a majority-based confidence ranking and ontological consistency conditions. As consistency conditions are not formally defined in the OLiA Reference Model (which is expected to inherit such constraints from External Reference Models), a heuristic, structure-based definition of consistency was applied. This heuristic consistency definition is overly rigid and rules out a number of consistent alternative analyses, as it is the case for overlapping categories.8 Despite this rigidity, we witness an increase of recall when multiple alternative analyses are integrated. This increase of recall may result from a compensation of tool-specific deficits, e.g., with respect to annotation granularity. Also, the improved recall can be explained by a compensation of overfitting, or deficits that are inherent to 8Preposition-determiner compounds like German am ‘on the’, for example, are both prepositions and determiners. 
a particular approach (e.g., differences in the coverage of the linguistic context). It can thus be stated that the integration of multiple alternative analyses has the potential to produce linguistic analyses that are both more robust and more detailed than those of the original tools. The primary field of application of this approach is most likely to be seen in a context where applications are designed that make direct use of OWL/RDF representations as described, for example, by Hellmann (2010). It is, however, also possible to use ontological representations to bootstrap novel and more detailed annotation schemes, cf. Zavrel and Daelemans (2000). Further, the conversion from string-based representations to ontological descriptions is reversible, so that results of ontology-based disambiguation and validation can also be reintegrated with the original annotation scheme. The idea of such a reversion algorithm was sketched by Buyko et al. (2008) where the OLiA ontologies were suggested as a means to translate between different annotation schemes.9 6 Extensions and Related Research Natural extensions of the approach described in this paper include: (i) Experiments with formally defined consistency conditions (e.g., with respect to restrictions on the domain of properties). (ii) Context-sensitive disambiguation of morphological features (e.g., by combination with a chunker and adjustment of confidence scores for morphological features over all tokens in the current chunk, cf. Kermes and Evert, 2002). (iii) Replacement of majority vote by more elaborate strategies to merge grammatical analyses. 9The mapping from ontological descriptions to tags of a particular scheme is possible, but neither trivial nor necessarily lossless: Information of ontological descriptions that cannot be expressed in the annotation scheme under consideration (e.g., the distinction between attributive and substitutive pronouns in the Morphisto scheme) will be missing in the resulting string representation. For complex annotations, where ontological descriptions correspond to different substrings, an additional ‘tag grammar’ may be necessary to determine the appropriate ordering of substrings according to the annotation scheme (e.g., in the Connexor analysis). 666 (iv) Application of the algorithm for the ontological processing of node labels and edge labels in syntax annotations. (v) Integration with other ontological knowledge sources in order to improve the recall of morphosyntactic and morphological analyses (e.g., for disambiguating grammatical case). Extensions (iii) and (iv) are currently pursued in an ongoing research effort described by Chiarcos et al. (2010). Like morphosyntactic and morphological features, node and edge labels of syntactic trees are ontologically represented in several Annotation Models, the OLiA Reference Model, and External Reference Models, the merging algorithm as described above can thus be applied for syntax, as well. Syntactic annotations, however, involve the additional challenge to align different structures before node and edge labels can be addressed, an issue not further discussed here for reasons of space limitations. Alternative strategies to merge grammatical analyses may include alternative voting strategies as discussed in literature on classifier combination, e.g., weighted majority vote, pairwise voting (Halteren et al., 1998), credibility profiles (Tufis¸, 2000), or hand-crafted rules (Borin, 2000). 
A novel feature of our approach as compared to existing applications of these methods is that confidence scores are not attached to plain strings, but to ontological descriptions: Tufis¸, for example, assigned confidence scores not to tools (as in a weighted majority vote), but rather, assessed the ‘credibility’ of a tool with respect to the predicted tag. If this approach is applied to ontological descriptions in place of tags, it allows us to consider the credibility of pieces of information regardless of the actual string representation of tags. For example, the credibility of hasCase descriptions can be assessed independently from the credibility of hasGender descriptions even if the original annotation merged both aspects in one single tag (as the RFTagger does, for example, cf. ex. 5). Extension (v) has been addressed in previous research, although mostly with the opposite perspective: Already Cimiano and Reyle (2003) noted that the integration of grammatical and semantic analyses may be used to resolve ambiguity and underspecifications, and this insight has also motivated the ontological representation of linguistic resources such as WordNet (Gangemi et al., 2003), FrameNet (Scheffczyk et al., 2006), the linking of corpora with such ontologies (Hovy et al., 2006), the modelling of entire corpora in OWL/DL (Burchardt et al., 2008), and the extension of existing ontologies with ontological representations of selected linguistic features (Buitelaar et al., 2006; Davis et al., 2008). Aguado de Cea et al. (2004) sketched an architecture for the closer ontology-based integration of grammatical and semantic information using OntoTag and several NLP tools for Spanish. Aguado de Cea et al. (2008) evaluate the benefits of this approach for the Spanish particle se, and conclude for this example that the combination of multiple tools yields more detailed and more accurate linguistic analyses of particularly problematic, polysemous function words. A similar increase in accuracy has also been repeatedly reported for ensemble combination approaches, that are, however, limited to tools that produce annotations according to the same tagset (Brill and Wu, 1998; Halteren et al., 2001). These observations provide further support for our conclusion that the ontology-based integration of morphosyntactic analyses enhances both the robustness and the level of detail of morphosyntactic and morphological analyses. Our approach extends the philosophy of ensemble combination approaches to NLP tools that do not only employ different strategies and philosophies, but also different annotation schemes. Acknowledgements From 2005 to 2008, the research on linguistic ontologies described in this paper was funded by the German Research Foundation (DFG) in the context of the Collaborative Research Center (SFB) 441 “Linguistic Data Structures”, Project C2 “Sustainability of Linguistic Resources” (University of T¨ubingen), and since 2007 in the context of the SFB 632 “Information Structure”, Project D1 “Linguistic Database” (University of Potsdam). The author would also like to thank Julia Ritz, Angela Lahee, Olga Chiarcos and three anonymous reviewers for helpful hints and comments. 667 References G. Aguado de Cea, ´A. I. de Mon-Rego, A. Pareja-Lora, and R. Plaza-Arteche. 2002. OntoTag: A semantic web page linguistic annotation model. In Proceedings of the ECAI 2002 Workshop on Semantic Authoring, Annotation and Knowledge Markup, Lyon, France, July. G. Aguado de Cea, A. Gomez-Perez, I. Alvarez de Mon, and A. 
Pareja-Lora. 2004. OntoTag’s linguistic ontologies: Improving semantic web annotations for a better language understanding in machines. In Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC’04), Las Vegas, Nevada, USA, April. G. Aguado de Cea, J. Puch, and J. ´A. Ramos. 2008. Tagging Spanish texts: The problem of “se”. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco, May. A. Aschenbrenner, P. Gietz, M.W. K¨uster, C. Ludwig, and H. Neuroth. 2006. TextGrid. A modular platform for collaborative textual editing. In Proceedings of the International Workshop on Digital Library Goes e-Science (DLSci06), pages 27–36, Alicante, Spain, September. D. Bakker, O. Dahl, M. Haspelmath, M. KoptjevskajaTamm, C. Lehmann, and A. Siewierska. 1993. EUROTYP guidelines. Technical report, European Science Foundation Programme in Language Typology. B. Bickel and J. Nichols. 2000. The goals and principles of AUTOTYP. http://www.uni-leipzig.de/∼autotyp/ theory.html. version of 01/12/2007. B. Bickel and J. Nichols. 2002. Autotypologizing databases and their use in fieldwork. In Proceedings of the LREC 2002 Workshop on Resources and Tools in Field Linguistics, Las Palmas, Spain, May. L. Borin. 2000. Something borrowed, something blue: Rule-based combination of POS taggers. In Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC 2000), Athens, Greece, May, 31st – June, 2nd. S. Brants and S. Hansen. 2002. Developments in the TIGER annotation scheme and their realization in the corpus. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002), pages 1643–1649, Las Palmas, Spain, May. S. Brants, S. Dipper, S. Hansen, W. Lezius, and G. Smith. 2002. The TIGER treebank. In Proceedings of the Workshop on Treebanks and Linguistic Theories, pages 24–41, Sozopol, Bulgaria, September. E. Brill and J. Wu. 1998. Classifier combination for improved lexical disambiguation. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (COLING-ACL 1998), pages 191–195, Montr´eal, Canada, August. P. Buitelaar, T. Declerck, A. Frank, S. Racioppa, M. Kiesel, M. Sintek, R. Engel, M. Romanelli, D. Sonntag, B. Loos, V. Micelli, R. Porzel, and P. Cimiano. 2006. LingInfo: Design and applications of a model for the integration of linguistic information in ontologies. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), Genoa, Italy, May. A. Burchardt, S. Pad´o, D. Spohr, A. Frank, and U. Heid. 2008. Formalising Multi-layer Corpora in OWL/DL – Lexicon Modelling, Querying and Consistency Control. In Proceedings of the 3rd International Joint Conference on NLP (IJCNLP 2008), Hyderabad, India, January. E. Buyko, C. Chiarcos, and A. Pareja-Lora. 2008. Ontology-based interface specifications for a NLP pipeline architecture. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco, May. M. Carl, C. Pease, L.L. Iomdin, and O. Streiter. 2000. Towards a dynamic linkage of example-based and rule-based machine translation. Machine Translation, 15(3):223–257. C. Chiarcos, S. Dipper, M. G¨otze, U. Leser, A. L¨udeling, J. Ritz, and M. Stede. 2008. A Flexible Framework for Integrating Annotations from Different Tools and Tag Sets. 
Traitement Automatique des Langues, 49(2). C. Chiarcos, K. Eckart, and J. Ritz. 2010. Creating and exploiting a resource of parallel parses. In 4th Linguistic Annotation Workshop (LAW 2010), held in conjunction with ACL-2010, Uppsala, Sweden, July. C. Chiarcos. 2008. An ontology of linguistic annotations. LDV Forum, 23(1):1–16. Foundations of Ontologies in Text Technology, Part II: Applications. C. Chiarcos. 2010. Grounding an ontology of linguistic annotations in the Data Category Registry. In Workshop on Language Resource and Language Technology Standards (LR&LTS 2010), held in conjunction with LREC 2010, Valetta, Malta, May. P. Cimiano and U. Reyle. 2003. Ontology-based semantic construction, underspecification and disambiguation. In Proceedings of the Lorraine/Saarland Workshop on Prospects and Recent Advances in the Syntax-Semantics Interface, pages 33–38, Nancy, France, October. B. Crysmann, A. Frank, B. Kiefer, S. M¨uller, G. Neumann, J. Piskorski, U. Sch¨afer, M. Siegel, H. Uszkoreit, F. Xu, M. Becker, and H. Krieger. 2002. An 668 integrated architecture for shallow and deep processing. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 441–448, Philadelphia, Pennsylvania, USA, July. B. Davis, S. Handschuh, A. Troussov, J. Judge, and M. Sogrin. 2008. Linguistically light lexical extensions for ontologies. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco, May. S. Dipper, M. G¨otze, and S. Skopeteas, editors. 2007. Information Structure in Cross-Linguistic Corpora: Annotation Guidelines for Phonology, Morphology, Syntax, Semantics, and Information Structure. Interdisciplinary Studies on Information Structure (ISIS), Working Papers of the SFB 632; 7. Universit¨atsverlag Potsdam, Potsdam, Germany. M.T. Egner, M. Lorch, and E. Biddle. 2007. UIMA Grid: Distributed large-scale text analysis. In Proceedings of the Seventh IEEE International Symposium on Cluster Computing and the Grid (CCGRID’07), pages 317–326, Rio de Janeiro, Brazil, May. S. Farrar and D.T. Langendoen. 2003. Markup and the GOLD ontology. In EMELD Workshop on Digitizing and Annotating Text and Field Recordings. Michigan State University, July. A. Gangemi, R. Navigli, and P. Velardi. 2003. The OntoWordNet project: Extension and axiomatization of conceptual relations in WordNet. In R. Meersman and Z. Tari, editors, Proceedings of On the Move to Meaningful Internet Systems (OTM2003), pages 820–838, Catania, Italy, November. P. Gietz, A. Aschenbrenner, S. Budenbender, F. Jannidis, M.W. K¨uster, C. Ludwig, W. Pempe, T. Vitt, W. Wegstein, and A. Zielinski. 2006. TextGrid and eHumanities. In Proceedings of the Second IEEE International Conference on e-Science and Grid Computing (E-SCIENCE ’06), pages 133–141, Amsterdam, The Netherlands, December. H. van Halteren, J. Zavrel, and W. Daelmans. 1998. Improving data driven wordclass tagging by system combination. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (COLING-ACL 1998), Montr´eal, Canada, August. H. van Halteren, J. Zavrel, and W. Daelmans. 2001. Improving accuracy in word class tagging through the combination of machine learning systems. Computational Linguistics, 27(2):199–229. S. Hellmann. 2010. The semantic gap of formalized meaning. In The 7th Extended Semantic Web Conference (ESWC 2010), Heraklion, Greece, May 30th – June 3rd. E. Hovy, M. 
Marcus, M. Palmer, L. Ramshaw, and R. Weischedel. 2006. Ontonotes: the 90% solution. In Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (HLT-NAACL 2006), pages 57–60, New York, June. N. Ide and L. Romary. 2004. A registry of standard data categories for linguistic annotation. In Proceedings of the Fourth Language Resources and Evaluation Conference (LREC 2004), pages 135–39, Lisboa, Portugal, May. M. Kemps-Snijders, M. Windhouwer, P. Wittenburg, and S.E. Wright. 2009. ISOcat: remodelling metadata for language resources. International Journal of Metadata, Semantics and Ontologies, 4(4):261– 276. H. Kermes and S. Evert. 2002. YAC – A recursive chunker for unrestricted German text. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002), pages 1805–1812, Las Palmas, Spain, May. J.D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii. 2003. GENIA corpus – A semantically annotated corpus for bio-textmining. Bioinformatics, 19(1):180–182. D. Klein and C.D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430, Sapporo, Japan, July. G. Leech and A. Wilson. 1996. EAGLES recommendations for the morphosyntactic annotation of corpora. Version of March 1996. T. Lu´ıs and D.M. de Matos. 2009. High-performance high-volume layered corpora annotation. In Proceedings of the Third Linguistic Annotation Workshop (LAW-III) held in conjunction with ACL-IJCNLP 2009, pages 99–107, Singapore, August. M. Mandel. 2006. Integrated annotation of biomedical text: Creating the PennBioIE corpus. In Text Mining Ontologies and Natural Language Processing in Biomedicine, Manchester, UK, March. M.P. Marcus, B. Santorini, and M.A. Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn Treebank. Computational linguistics, 19(2):313–330. R. Meyer. 2003. Halbautomatische morphosyntaktische Annotation russischer Texte. In R. Hammel and L. Geist, editors, Linguistische Beitr¨age zur Slavistik aus Deutschland und ¨Osterreich. X. JungslavistInnen-Treffen, Berlin 2001, pages 92– 105. Sagner, M¨unchen. S. Petrov and D. Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (HLT-NAACL 2007), pages 404– 411, Rochester, NY, April. 669 S. Petrova, C. Chiarcos, J. Ritz, M. Solf, and A. Zeldes. 2009. Building and using a richly annotated interlinear diachronic corpus: The case of Old High German Tatian. Traitement automatique des langues et langues anciennes, 50(2):47–71. G. Rehm, R. Eckart, and C. Chiarcos. 2007. An OWLand XQuery-based mechanism for the retrieval of linguistic patterns from XML-corpora. In Proceedings of Recent Advances in Natural Language Processing (RANLP 2007), Borovets, Bulgaria, September. G. Sampson. 1995. English for the computer: The SUSANNE corpus and analytic scheme. Oxford University Press. A. Saulwick, M. Windhouwer, A. Dimitriadis, and R. Goedemans. 2005. Distributed tasking in ontology mediated integration of typological databases for linguistic research. In Proceedings of the 17th Conference on Advanced Information Systems Engineering (CAiSE’05), Porto, Portugal, June. J. Scheffczyk, A. Pease, and M. Ellsworth. 2006. Linking FrameNet to the suggested upper merged ontology. 
In Proceedings of the Fourth International Conference on Formal Ontology in Information Systems (FOIS 2006), pages 289–300, Baltimore, Maryland, USA, November. A. Schiller, S. Teufel, C. Thielen, and C. St¨ockert. 1999. Guidelines f¨ur das Tagging deutscher Textcorpora mit STTS. Technical report, University of Stuttgart, University of T¨ubingen. H. Schmid and F. Laws. 2008. Estimation of conditional probabilities with decision trees and an application to fine-grained pos tagging. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), Manchester, UK, August. H. Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of International Conference on New Methods in Language Processing, pages 44–49, Manchester, UK, September. T. Schmidt, C. Chiarcos, T. Lehmberg, G. Rehm, A. Witt, and E. Hinrichs. 2006. Avoiding data graveyards: From heterogeneous data collected in multiple research projects to sustainable linguistic resources. In Proceedings of the E-MELD workshop on Digital Language Documentation: Tools and Standards: The State of the Art, East Lansing, Michigan, US, June. S. Sharoff, M. Kopotev, T. Erjavec, A. Feldman, and D. Divjak. 2008. Designing and evaluating Russian tagsets. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco, May. E. Sirin, B. Parsia, B.C. Grau, A. Kalyanpur, and Y. Katz. 2007. Pellet: A practical OWL/DL reasoner. Web Semantics: Science, Services and Agents on the World Wide Web, 5(2):51–53. W. Skut, T. Brants, B. Krenn, and H. Uszkoreit. 1998. A linguistically interpreted corpus of German newspaper text. In In Proceedings of the ESSLLI Workshop on Recent Advances in Corpus Annotation, Saarbr¨ucken, Germany, August. M. Stede. 2004. The Potsdam Commentary Corpus. In Proceedings of the 2004 ACL Workshop on Discourse Annotation, pages 96–102, Barcelona, Spain, July. P. Tapanainen and T. J¨arvinen. 1997. A nonprojective dependency parser. In Proceedings of the 5th Conference on Applied Natural Language Processing, pages 64–71, Washington, DC, April. K. Toutanova, D. Klein, C.D. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (HLT-NAACL 2003), Edmonton, Canada, May. D. Tufis¸. 2000. Using a large set of EAGLEScompliant morpho-syntactic descriptors as a tagset for probabilistic tagging. In Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC 2000), pages 1105–1112, Athens, Greece, May, 31st – June, 2nd. A. Wagner and B. Zeisler. 2004. A syntactically annotated corpus of Tibetan. In Fourth International Conference on Language Resources and Evaluation (LREC 2004), Lisboa, Portugal, May. J. Zavrel and W. Daelemans. 2000. Bootstrapping a tagged corpus through combination of existing heterogeneous taggers. In Proceedings of the 2nd International Conference on Language Resources and Evaluation (LREC 2000), Athens, Greece, May, 31st – June, 2nd. A. Zielinski and C. Simon. 2008. Morphisto: An open-source morphological analyzer for German. In Proceedings of the Conference on Finite State Methods in Natural Language Processing (FSMNLP), Ispra, Italy, September. 670

Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 671–677, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Temporal information processing of a new language: fast porting with minimal resources Francisco Costa and Ant´onio Branco Universidade de Lisboa Abstract We describe the semi-automatic adaptation of a TimeML annotated corpus from English to Portuguese, a language for which TimeML annotated data was not available yet. In order to validate this adaptation, we use the obtained data to replicate some results in the literature that used the original English data. The fact that comparable results are obtained indicates that our approach can be used successfully to rapidly create semantically annotated resources for new languages. 1 Introduction Temporal information processing is a topic of natural language processing boosted by recent evaluation campaigns like TERN2004,1 TempEval-1 (Verhagen et al., 2007) and the forthcoming TempEval-22 (Pustejovsky and Verhagen, 2009). For instance, in the TempEval-1 competition, three tasks were proposed: a) identifying the temporal relation (such as overlap, before or after) holding between events and temporal entities such as dates, times and temporal durations denoted by expressions (i.e. temporal expressions) occurring in the same sentence; b) identifying the temporal relation holding between events expressed in a document and its creation time; c) identifying the temporal relation between the main events expressed by two adjacent sentences. Supervised machine learning approaches are pervasive in the tasks of temporal information processing. Even when the best performing systems in these competitions are symbolic, there are machine learning solutions with results close to their performance. In TempEval-1, where there were statistical and rule-based systems, almost 1http://timex2.mitre.org 2http://www.timeml.org/tempeval2 all systems achieved quite similar results. In the TERN2004 competition (aimed at identifying and normalizing temporal expressions), a symbolic system performed best, but since then machine learning solutions, such as (Ahn et al., 2007), have appeared that obtain similar results. These evaluations made available sets of annotated data for English and other languages, used for training and evaluation. One natural question to ask is whether it is feasible to adapt the training and test data made available in these competitions to other languages, for which no such data still exist. Since the annotations are largely of a semantic nature, not many changes need to be done in the annotations once the textual material is translated. In essence, this would be a fast way to create temporal information processing systems for languages for which there are no annotated data yet. In this paper, we report on an experiment that consisted in adapting the English data of TempEval-1 to Portuguese. The results of machine learning algorithms over the data thus obtained are compared to those reported for the English TempEval-1 competition. Since the results are quite similar, this permits to conclude that such an approach can rapidly generate relevant and comparable data and is useful when porting temporal information processing solutions to new languages. 
The advantages of adapting an existing corpus instead of annotating text from scratch are: i) potentially less time consuming, if it is faster to translate the original text than it is to annotate new text (this can be the case if the annotations are semantic and complex); b) the annotations can be transposed without substantial modifications, which is the case if they are semantic in nature; c) less man power required: text annotation requires multiple annotators in order to guarantee the quality of the annotation tags, translation of the markables and transposition of the annotations 671 in principle do not; d) the data obtained are comparable to the original data in all respects except for language: genre, domain, size, style, annotation decisions, etc., which allows for research to be conducted with a derived corpus that is comparable to research using the original corpus. There is of course the caveat that the adaptation process can introduce errors. This paper proceeds as follows. In Section 2, we provide a quick overview of the TimeML annotations in the TempEval-1 data. In Section 3, it is described how the data were adapted to Portuguese. Section 4 contains a brief quantitative comparison of the two corpora. In Section 5, the results of replicating one of the approaches present in the TempEval-1 challenge with the Portuguese data are presented. We conclude this paper in Section 6. 2 Brief Description of the Annotations Figure 1 contains an example of a document from the TempEval-1 corpus, which is similar to the TimeBank corpus (Pustejovsky et al., 2003). In this corpus, event terms are tagged with <EVENT>. The relevant attributes are tense, aspect, class, polarity, pos, stem. The stem is the term’s lemma, and pos is its part-ofspeech. Grammatical tense and aspect are encoded in the features tense and aspect. The attribute polarity takes the value NEG if the event term is in a negative syntactic context, and POS otherwise. The attribute class contains several levels of information. It makes a distinction between terms that denote actions of speaking, which take the value REPORTING and those that do not. For these, it distinguishes between states (value STATE) and non-states (value OCCURRENCE), and it also encodes whether they create an intensional context (value I STATE for states and value I ACTION for non-states). Temporal expressions (timexes) are inside <TIMEX3> elements. The most important features for these elements are value, type and mod. The timex’s value encodes a normalized representation of this temporal entity, its type can be e.g. DATE, TIME or DURATION. The mod attribute is optional. It is used for expressions like early this year, which are annotated with mod="START". As can be seen in Figure 1 there are other attributes for timexes that encode whether it is the document’s creation time (functionInDocument) and whether its value can be determined from the expression alone or requires other sources of information (temporalFunction and anchorTimeID). The <TLINK> elements encode temporal relations. The attribute relType represents the type of relation, the feature eventID is a reference to the first argument of the relation. The second argument is given by the attribute relatedToTime (if it is a time interval or duration) or relatedToEvent (if it is another event; this is for task C). The task feature is the name of the TempEval-1 task to which this temporal relation pertains. 
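The annotations described in the previous section are plain XML attributes, so they can be collected with standard tools. The sketch below is a hypothetical reader rather than the authors' code: it indexes the <EVENT> and <TIMEX3> elements by their identifiers and resolves each <TLINK> to its two arguments; the file name at the end is a placeholder.

```python
# A sketch of reading one TempEval-1 document with the Python standard
# library; element and attribute names follow the annotation description
# above, and the file path is a placeholder.
import xml.etree.ElementTree as ET

def read_tempeval_document(path):
    root = ET.parse(path).getroot()          # the <TempEval> element
    events = {e.get("eid"): e.attrib for e in root.iter("EVENT")}
    timexes = {t.get("tid"): t.attrib for t in root.iter("TIMEX3")}
    tlinks = []
    for link in root.iter("TLINK"):
        tlinks.append({
            "task": link.get("task"),
            "relType": link.get("relType"),          # the relation to predict
            "event": events.get(link.get("eventID")),
            # the second argument is either a timex (tasks A, B) or another
            # event (task C)
            "other": timexes.get(link.get("relatedToTime"))
                     or events.get(link.get("relatedToEvent")),
        })
    return events, timexes, tlinks

events, timexes, tlinks = read_tempeval_document("document.tml")  # placeholder
```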
3 Data Adaptation We cleaned all TimeML markup in the TempEval-1 data and the result was fed to the Google Translator Toolkit.3 This tool combines machine translation with a translation memory. A human translator corrected the proposed translations manually. After that, we had the three collections of documents (the TimeML data, the English unannotated data and the Portuguese unannotated data) aligned by paragraphs (we just kept the line breaks from the original collection in the other collections). In this way, for each paragraph in the Portuguese data we know all the corresponding TimeML tags in the original English paragraph. We tried using machine translation software (we used GIZA++ (Och and Ney, 2003)) to perform word alignment on the unannotated texts, which would have enabled us to transpose the TimeML annotations automatically. However, word alignment algorithms have suboptimal accuracy, so the results would have to be checked manually. Therefore we abandoned this idea, and instead we simply placed the different TimeML markup in the correct positions manually. This is possible since the TempEval-1 corpus is not very large. A small script was developed to place all relevant TimeML markup at the end of each paragraph in the Portuguese text, and then each tag was manually repositioned. Note that the <TLINK> elements always occur at the end of each document, each in a separate line: therefore they do not need to be repositioned. During this manual repositioning of the annotations, some attributes were also changed man3http://translate.google.com/toolkit 672 <?xml version="1.0" ?> <TempEval> ABC<TIMEX3 tid="t52" type="DATE" value="1998-01-14" temporalFunction="false" functionInDocument="CREATION_TIME">19980114</TIMEX3>.1830.0611 NEWS STORY <s>In Washington <TIMEX3 tid="t53" type="DATE" value="1998-01-14" temporalFunction="true" functionInDocument="NONE" anchorTimeID="t52">today</TIMEX3>, the Federal Aviation Administration <EVENT eid="e1" class="OCCURRENCE" stem="release" aspect="NONE" tense="PAST" polarity="POS" pos="VERB">released </EVENT> air traffic control tapes from <TIMEX3 tid="t54" type="TIME" value="1998-XX-XXTNI" temporalFunction="true" functionInDocument="NONE" anchorTimeID="t52">the night</TIMEX3> the TWA Flight eight hundred <EVENT eid="e2" class="OCCURRENCE" stem="go" aspect="NONE" tense="PAST" polarity="POS" pos="VERB">went</EVENT>down.</s> ... <TLINK lid="l1" relType="BEFORE" eventID="e2" relatedToTime="t53" task="A"/> <TLINK lid="l2" relType="OVERLAP" eventID="e2" relatedToTime="t54" task="A"/> <TLINK lid="l4" relType="BEFORE" eventID="e2" relatedToTime="t52" task="B"/> ... </TempEval> Figure 1: Extract of a document contained in the training data of the first TempEval-1 ually. In particular, the attributes stem, tense and aspect of <EVENT> elements are language specific and needed to be adapted. Sometimes, the pos attribute also needs to be changed, since e.g. a verb in English can be translated as a noun in Portuguese. The attribute class of the same kind of elements can be different, too, because natural sounding translations are sometimes not literal. 3.1 Annotation Decisions When porting the TimeML annotations from English to Portuguese, a few decisions had to be made. For illustration purposes, Figure 2 contains the Portuguese equivalent of the extract presented in Figure 1. 
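The markup-stripping step described above can be approximated in a few lines; the tag inventory and the function below are assumptions based on the element names visible in Figure 1, not the authors' actual script. Line breaks are kept so that paragraphs in the source text and in the translation stay aligned.

```python
# A minimal sketch of the first adaptation step: drop all TimeML markup but
# keep the character data and the original line breaks. Regular expressions
# suffice here because the tags only need to be removed, not interpreted.
import re

TAG = re.compile(r"</?(?:TempEval|s|EVENT|TIMEX3|TLINK)\b[^>]*>")
XML_DECL = re.compile(r"<\?xml[^>]*\?>")

def strip_timeml(annotated_text):
    lines = []
    for line in annotated_text.splitlines():
        line = XML_DECL.sub("", line)
        line = TAG.sub("", line)
        lines.append(line)
    return "\n".join(lines)

sample = '<s>the FAA <EVENT eid="e1" pos="VERB">released</EVENT> tapes</s>'
print(strip_timeml(sample))   # -> "the FAA released tapes"
```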
For <TIMEX3> elements, the issue is that if the temporal expression to be annotated is a prepositional phrase, the preposition should not be inside the <TIMEX3> tags according to the TimeML specification. In the case of Portuguese, this raises the question of whether to leave contractions of prepositions with determiners outside these tags (in the English data the preposition is outside and the determiner is inside).4 We chose to leave them outside, as can be seen in that Figure. In this example the prepositional phrase from the night/da noite is annotated with the English noun phrase the night inside the <TIMEX3> element, but the Portuguese version only contains the noun noite inside those tags. For <EVENT> elements, some of the attributes are adapted. The value of the attribute stem is 4The fact that prepositions are placed outside of temporal expressions seems odd at first, but this is because in the original TimeBank, from which the TempEval data were derived, they are tagged as <SIGNAL>s. The TempEval-1 data does not contain <SIGNAL> elements, however. obviously different in Portuguese. The attributes aspect and tense have a different set of possible values in the Portuguese data, simply because the morphology of the two languages is different. In the example in Figure 1 the value PPI for the attribute tense stands for pret´erito perfeito do indicativo. We chose to include mood information in the tense attribute because the different tenses of the indicative and the subjunctive moods do not line up perfectly as there are more tenses for the indicative than for the subjunctive. For the aspect attribute, which encodes grammatical aspect, we only use the values NONE and PROGRESSIVE, leaving out the values PERFECTIVE and PERFECTIVE PROGRESSIVE, as in Portuguese there is no easy match between perfective aspect and grammatical categories. The attributes of <TIMEX3> elements carry over to the Portuguese corpus unchanged, and the <TLINK> elements are taken verbatim from the original documents. 4 Data Description The original English data for TempEval-1 are based on the TimeBank data, and they are split into one dataset for training and development and another dataset for evaluation. The full data are organized in 182 documents (162 documents in the training data and another 20 in the test data). Each document is a news report from television broadcasts or newspapers. A large amount of the documents (123 in the training set and 12 in the test data) are taken from a 1989 issue of the Wall Street Journal. The training data comprise 162 documents with 673 <?xml version="1.0" encoding="UTF-8" ?> <TempEval> ABC<TIMEX3 tid="t52" type="DATE" value="1998-01-14" temporalFunction="false" functionInDocument="CREATION_TIME">19980114</TIMEX3>.1830.1611 REPORTAGEM <s>Em Washington, <TIMEX3 tid="t53" type="DATE" value="1998-01-14" temporalFunction="true" functionInDocument="NONE" anchorTimeID="t52">hoje</TIMEX3>, a Federal Aviation Administration <EVENT eid="e1" class="OCCURRENCE" stem="publicar" aspect="NONE" tense="PPI" polarity="POS" pos="VERB">publicou </EVENT> gravaoes do controlo de trfego areo da <TIMEX3 tid="t54" type="TIME" value="1998-XX-XXTNI" temporalFunction="true" functionInDocument="NONE" anchorTimeID="t52">noite</TIMEX3> em que o voo TWA800 <EVENT eid="e2" class="OCCURRENCE" stem="cair" aspect="NONE" tense="PPI" polarity="POS" pos="VERB">caiu </EVENT> .</s> ... 
<TLINK lid="l1" relType="BEFORE" eventID="e2" relatedToTime="t53" task="A"/> <TLINK lid="l2" relType="OVERLAP" eventID="e2" relatedToTime="t54" task="A"/> <TLINK lid="l4" relType="BEFORE" eventID="e2" relatedToTime="t52" task="B"/> ... </TempEval> Figure 2: Extract of a document contained in the Portuguese data 2,236 sentences (i.e. 2236 <s> elements) and 52,740 words. It contains 6799 <EVENT> elements, 1,244 <TIMEX3> elements and 5,790 <TLINK> elements. Note that not all the events are included here: the ones expressed by words that occur less than 20 times in TimeBank were removed from the TempEval-1 data. The test dataset contains 376 sentences and 8,107 words. The number of <EVENT> elements is 1,103; there are 165 <TIMEX3>s and 758 <TLINK>s. The Portuguese data of course contain the same (translated) documents. The training dataset has 2,280 sentences and 60,781 words. The test data contains 351 sentences and 8,920 words. 5 Comparing the two Datasets One of the systems participating in the TempEval-1 competition, the USFD system (Hepple et al., 2007), implemented a very straightforward solution: it simply trained classifiers with Weka (Witten and Frank, 2005), using as attributes information that was readily available in the data and did not require any extra natural language processing (for all tasks, the attribute relType of <TLINK> elements is unknown and must be discovered, but all the other information is given). The authors’ objectives were to see “whether a ‘lite’ approach of this kind could yield reasonable performance, before pursuing possibilities that relied on ‘deeper’ NLP analysis methods”, “which of the features would contribute positively to system performance” and “if any [machine learning] approach was better suited to the TempEval tasks than any other”. In spite of its simplicity, they obtained results quite close to the best systems. For us, the results of (Hepple et al., 2007) are interesting as they allow for a straightforward evaluation of our adaptation efforts, since the same machine learning implementations can be used with the Portuguese data, and then compared to their results. The differences in the data are mostly due to language. Since the languages are different, the distribution of the values of several attributes are different. For instance, we included both tense and mood information in the tense attribute of <EVENT>s, as mentioned in Section 3.1, so instead of seven possible values for this attribute, the Portuguese data contains more values, which can cause more data sparseness. Other attributes affected by language differences are aspect, pos, and class, which were also possibly changed during the adaptation process. One important difference between the English and the Portuguese data originates from the fact that events with a frequency lower than 20 were removed from the English TempEval-1 data. Since there is not a 1 to 1 relation between English event terms and Portuguese event terms, we do not have the guarantee that all event terms in the Portuguese data have a frequency of at least 20 occurrences in the entire corpus.5 The work of (Hepple et al., 2007) reports on both cross-validation results for various classifiers over the training data and evaluation results on the training data, for the English dataset. We we will 5In fact, out of 1,649 different stems for event terms in the Portuguese training data, only 45 occur at least 20 times. 674 Task Attribute A B C EVENT-aspect ! ! ! EVENT-polarity ! ! × EVENT-POS ! ! ! EVENT-stem ! 
× × EVENT-string × × × EVENT-class × ! ! EVENT-tense × ! ! ORDER-adjacent ! N/A N/A ORDER-event-first ! N/A N/A ORDER-event-between × N/A N/A ORDER-timex-between × N/A N/A TIMEX3-mod ! × N/A TIMEX3-type ! × N/A Table 1: Features used for the English TempEval-1 tasks. N/A means the feature was not applicable to the task, !means the feature was used by the best performing classifier for the task, and × means it was not used by that classifier. From (Hepple et al., 2007). be comparing their results to ours. Our purpose with this comparison is to validate the corpus adaptation. Similar results would not necessarily indicate the quality of the adapted corpus. After all, a word-by-word translation would produce data that would yield similar results, but it would also be a very poor translation, and therefore the resulting corpus would not be very interesting. The quality of the translation is not at stake here, since it was manually revised. But similar results would indicate that the obtained data are comparable to the original data, and that they are similarly useful to tackle the problem for which the original data were collected. This would confirm our hypothesis that adapting an existing corpus can be an effective way to obtain new data for a different language. 5.1 Results for English The attributes employed for English by (Hepple et al., 2007) are summarized in Table 1. The class is the attribute relType of <TLINK> elements. The EVENT features are taken from <EVENT> elements. The EVENT-string attribute is the character data inside the element. The other attributes correspond to the feature of <EVENT> with the same name. The TIMEX3 features Task Algorithm A B C baseline 49.8 62.1 42.0 lazy.KStar 58.2 76.7 54.0 rules.DecisionTable 53.3 79.0 52.9 functions.SMO 55.1 78.1 55.5 rules.JRip 50.7 78.6 53.4 bayes.NaiveBayes 56.3 76.2 50.7 Table 2: Performance of several machine learning algorithms on the English TempEval-1 training data, with cross-validation. The best result for each task is in boldface. From (Hepple et al., 2007). also correspond to attributes of the relevant <TIMEX3> element. The ORDER features are boolean and computed as follows: • ORDER-event-first is whether the <EVENT> element occurs in the text before the <TIMEX3> element; • ORDER-event-between is whether an <EVENT> element occurs in the text between the two temporal entities being ordered; • ORDER-timex-between is the same, but for temporal expressions; • ORDER-adjacent is whether both ORDER-event-between and ORDERtimex-between are false (but other textual data may occur between the two entities). Cross-validation over the training data produced the results in Table 2. The baseline used is the majority class baseline, as given by Weka’s rules.ZeroR implementation. The lazy.KStar algorithm is a nearest-neighbor classifier that uses an entropybased measure to compute instance similarity. Weka’s rules.DecisionTablealgorithm assigns to an unknown instance the majority class of the training examples that have the same attribute values as that instance that is being classified. functions.SMO is an implementation of Support Vector Machines (SVM), rules.JRip is the RIPPER algorithm, and bayes.NaiveBayes is a Naive Bayes classifier. 
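The four ORDER features listed above are simple positional tests. The sketch below computes them for one event/timex pair given the document-order positions of all annotated entities; using element indices rather than character offsets is an assumption, either works.

```python
# A sketch of the boolean ORDER features for task A, computed from the
# document order of the annotated elements.

def order_features(event_pos, timex_pos, all_event_positions, all_timex_positions):
    lo, hi = sorted((event_pos, timex_pos))
    event_between = any(lo < p < hi for p in all_event_positions)
    timex_between = any(lo < p < hi for p in all_timex_positions)
    return {
        "ORDER-event-first": event_pos < timex_pos,
        "ORDER-event-between": event_between,
        "ORDER-timex-between": timex_between,
        "ORDER-adjacent": not event_between and not timex_between,
    }

# e.g. an event at position 5 and a timex at position 7, with another event
# at position 6 in between
print(order_features(5, 7, {5, 6}, {7}))
```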
675 Task Algorithm A B C baseline 49.8 62.1 42.0 lazy.KStar 57.4 77.7 53.3 rules.DecisionTable 54.2 78.1 51.6 functions.SMO 55.5 79.3 56.8 rules.JRip 52.1 77.6 52.1 bayes.NaiveBayes 56.0 78.2 53.5 trees.J48 55.6 79.0 59.3 Table 3: Performance of several machine learning algorithms on the Portuguese data for the TempEval-1 tasks. The best result for each task is in boldface. 5.2 Attributes We created a small script to convert the XML annotated files into CSV files, that can be read by Weka. In this process, we included the same attributes as the USFD authors used for English. For task C, (Hepple et al., 2007) are not very clear whether the EVENT attributes used were related to just one of the two events being temporally related. In any case, we used two of each of the EVENT attributes, one for each event in the temporal relation to be determined. So, for instance, an extra attribute EVENT2-tense is where the tense of the second event in the temporal relation is kept. 5.3 Results The majority class baselines produce the same results as for English. This was expected: the class distribution is the same in the two datasets, since the <TLINK> elements were copied to the adapted corpus without any changes. For the sake of comparison, we used the same classifiers as (Hepple et al., 2007), and we used the attributes that they found to work best for English (presented above in Table 1). The results for the Portuguese dataset are in Table 3, using 10-fold cross-validation on the training data. We also present the results for Weka’s implementation of the C4.5 algorithm, to induce decision trees. The motivation to run this algorithm over these data is that decision trees are human readable and make it easy to inspect what decisions the classifier is making. This is also true of rules.JRip. The results for the decision trees are in this table, too. The results obtained are almost identical to the results for the original dataset in English. The best performing classifier for task A is the same as for English. For task B, Weka’s functions.SMO produced better results with the Portuguese data than rules.DecisionTable, the best performing classifier with the English data for this task. In task C, the SVM algorithm was also the best performing algorithm among those that were also tried on the English data, but decision trees produced even better results here. For English, the best performing classifier for each task on the training data, according to Table 2, was used for evaluation on the test data: the results showed a 59% F-measure for task A, 73% for task B, and 54% for task C. Similarly, we also evaluated the best algorithm for each task (according to Table 3) with the Portuguese test data, after training it on the entire training dataset. The results are: in task A the lazy.KStar classifier scored 58.6%, and the SVM classifier scored 75.5% in task B and 59.4% in task C, with trees.J48 scoring 61% in this task. The results on the test data are also fairly similar for the two languages/datasets. We inspected the decision trees and rule sets produced by trees.J48 and rules.JRip, in order to see what the classifiers are doing. Task B is probably the easiest task to check this way, because we expect grammatical tense to be highly predictive of the temporal order between an event and the document’s creation time. 
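The XML-to-CSV conversion mentioned at the start of this section can be sketched as follows, reusing the tlink records produced by the parsing sketch given earlier (a list of dicts holding each link's task, relType and resolved event attributes). The column set follows the task B features in Table 1; the exact layout of the authors' CSV files is not specified, so the format below is an assumption.

```python
# A sketch of converting annotated <TLINK>s into a Weka-readable CSV table:
# one row per task B link, with event attributes as features and relType as
# the class label. Column choice follows Table 1; the layout is an assumption.
import csv

TASK_B_COLUMNS = ["EVENT-aspect", "EVENT-polarity", "EVENT-POS",
                  "EVENT-class", "EVENT-tense", "relType"]

def write_task_b_csv(tlinks, out_path):
    """tlinks: list of dicts with keys 'task', 'relType' and 'event',
    where 'event' is the attribute dict of the linked <EVENT> element."""
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(TASK_B_COLUMNS)
        for link in tlinks:
            if link["task"] != "B":
                continue
            ev = link["event"]
            writer.writerow([ev.get("aspect"), ev.get("polarity"), ev.get("pos"),
                             ev.get("class"), ev.get("tense"), link["relType"]])
```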
And, indeed, the top of the tree induced by trees.J48 is quite interesting: eTense = PI: OVERLAP (388.0/95.0) eTense = PPI: BEFORE (1051.0/41.0) Here, eTense is the EVENT-tense attribute of <EVENT> elements, PI stands for present indicative, and PPI is past indicative (pret´erito perfeito do indicativo). In general, one sees past tenses associated with the BEFORE class and future tenses associated with the AFTER class (including the conditional forms of verbs). Infinitives are mostly associated with the AFTER class, and present subjunctive forms with AFTER and OVERLAP. Figure 3 shows the rule set induced by the RIPPER algorithm. The classifiers for the other tasks are more difficult to inspect. For instance, in task A, the event term and the temporal expression that denote the entities that are to be ordered may not even be directly syntactically related. Therefore, it is hard to 676 (eClass = OCCURRENCE) and ( eTense = INF) and ( ePolarity = POS) => lRelType= AFTER (183.0/77.0) ( eTense = FI) => lRelType= AFTER (55.0/10.0) (eClass = OCCURRENCE) and ( eTense = IR-PI+INF) => lRelType= AFTER (26.0/4.0) (eClass = OCCURRENCE) and ( eTense = PC) => lRelType= AFTER (15.0/3.0) (eClass = OCCURRENCE) and ( eTense = C) => lRelType= AFTER (17.0/2.0) ( eTense = PI) => lRelType= OVERLAP (388.0/95.0) (eClass = ASPECTUAL) and ( eTense = PC) => lRelType= OVERLAP (9.0/2.0) => lRelType= BEFORE (1863.0/373.0) Figure 3: rules.JRip classifier induced for task B. INF stands for infinitive, FI is future indicative, IR-PI+INF is an infinitive form following a present indicative form of the verb ir (to go), PC is present subjunctive, C is conditional, PI is present indicative. see how interesting the inferred rules are, because we do not know what would be interesting in this scenario. In any case, the top of the induced tree for task A is: oAdjacent = True: OVERLAP (554.0/128.0) Here, oAdjacent is the ORDER-adjacent attribute. Assuming this attribute is an indication that the event term and the temporal expression are related syntactically, it is interesting to see that the typical temporal relation between the two entities in this case is an OVERLAP relation. The rest of the tree is much more ad-hoc, making frequent use of the stem attribute of <EVENT> elements, suggesting the classifier is memorizing the data. Task C, where two events are to be ordered, produced more complicated classifiers. Generally the induced rules and the tree paths compare the tense and the class of the two event terms, showing some expected heuristics (such as, if the tense of the first event is future and the tense of the second event is past, assign AFTER). But there are also many several rules for which we do not have clear intuitions. 6 Discussion In this paper, we described the semi-automatic adaptation of a TimeML annotated corpus from English to Portuguese, a language for which TimeML annotated data was not available yet. Because most of the TimeML annotations are semantic in nature, they can be transposed to a translation of the original corpus, with few adaptations being required. In order to validate this adaptation, we used the obtained data to replicate some results in the literature that used the original English data. The results for the Portuguese data are very similar to the ones for English. This indicates that our approach to adapt existing annotated data to a different language is fruitful. References David Ahn, Joris van Rantwijk, and Maarten de Rijke. 2007. 
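Read as a procedure, the learned tense-based pattern for task B boils down to a small lookup table. The sketch below paraphrases Figure 3 and the top of the J48 tree while dropping the event-class conditions; it is a reading aid, not the induced classifier itself.

```python
# A toy re-statement of the pattern the learned classifiers pick up for
# task B: the tense/mood of the event term largely determines its temporal
# order with respect to the document creation time.

TENSE_TO_RELATION = {
    "PI":  "OVERLAP",      # present indicative
    "PPI": "BEFORE",       # preterito perfeito do indicativo (past)
    "FI":  "AFTER",        # future indicative
    "C":   "AFTER",        # conditional
    "INF": "AFTER",        # infinitive
    "PC":  "AFTER",        # present subjunctive (the rules also allow OVERLAP)
    "IR-PI+INF": "AFTER",  # infinitive after a present form of "ir" (to go)
}

def task_b_heuristic(event_tense):
    # BEFORE is the default (majority) rule in Figure 3
    return TENSE_TO_RELATION.get(event_tense, "BEFORE")

print(task_b_heuristic("PPI"))  # -> BEFORE
```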
A cascaded machine learning approach to interpreting temporal expressions. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 420–427, Rochester, New York, April. Association for Computational Linguistics. Mark Hepple, Andrea Setzer, and Rob Gaizauskas. 2007. USFD: Preliminary exploration of features and classifiers for the TempEval-2007 tasks. In Proceedings of SemEval-2007, pages 484–487, Prague, Czech Republic. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. James Pustejovsky and Marc Verhagen. 2009. Semeval-2010 task 13: evaluating events, time expressions, and temporal relations (tempeval-2). In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions, pages 112–116, Boulder, Colorado. Association for Computational Linguistics. James Pustejovsky, Patrick Hanks, Roser Saur´ı, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, and Marcia Lazo. 2003. The TIMEBANK corpus. In Proceedings of Corpus Linguistics 2003, pages 647–656. M. Verhagen, R. Gaizauskas, F. Schilder, M. Hepple, and J. Pustejovsky. 2007. SemEval-2007 Task 15: TempEval temporal relation identification. In Proceedings of SemEval-2007. Ian H. Witten and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, San Francisco. second edition. 677
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 60–68, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Correcting errors in speech recognition with articulatory dynamics Frank Rudzicz University of Toronto, Department of Computer Science Toronto, Ontario, Canada [email protected] Abstract We introduce a novel mechanism for incorporating articulatory dynamics into speech recognition with the theory of task dynamics. This system reranks sentencelevel hypotheses by the likelihoods of their hypothetical articulatory realizations which are derived from relationships learned with aligned acoustic/articulatory data. Experiments compare this with two baseline systems, namely an acoustic hidden Markov model and a dynamic Bayes network augmented with discretized representations of the vocal tract. Our system based on task dynamics reduces worderror rates significantly by 10.2% relative to the best baseline models. 1 Introduction Although modern automatic speech recognition (ASR) takes several cues from the biological perception of speech, it rarely models its biological production. The result is that speech is treated as a surface acoustic phenomenon with lexical or phonetic hidden dynamics but without any physical constraints in between. This omission leads to some untenable assumptions. For example, speech is often treated out of convenience as a sequence of discrete, non-overlapping packets, such as phonemes, despite the fact that some major difficulties in ASR, such as co-articulation, are by definition the result of concurrent physiological phenomena (Hardcastle and Hewlett, 1999). Many acoustic ambiguities can be resolved with knowledge of the vocal tract’s configuration (O’Shaughnessy, 2000). For example, the three nasal sonorants, /m/, /n/, and /ng/, are acoustically similar (i.e., they have large concentrations of energy at the same frequencies) but uniquely and reliably involve bilabial closure, tongue-tip elevation, and tongue-dorsum elevation, respectively. Having access to the articulatory goals of the speaker would, in theory, make the identification of linguistic intent almost trivial. Although we don’t typically have access to the vocal tract during speech recognition, its configuration can be estimated reasonably well from acoustics alone within adequate models or measurements of the vocal tract (Richmond et al., 2003; Toda et al., 2008). Evidence that such inversion takes place naturally in humans during speech perception suggests that the discriminability of speech sounds depends powerfully on their production (Liberman and Mattingly, 1985; D’Ausilio et al., 2009). This paper describes the use of explicit models of physical speech production within recognition systems. Initially, we augment traditional models of ASR with probabilistic relationships between acoustics and articulation learned from appropriate data. This leads to the incorporation of a highlevel, goal-oriented, and control-based theory of speech production within a novel ASR system. 2 Background and related work The use of theoretical (phonological) features of the vocal tract has provided some improvement over traditional acoustic ASR systems in phoneme recognition with neural networks (Kirchhoff, 1999; Roweis, 1999), but there has been very little work in ASR informed by direct measurements of the vocal tract. Recently, Markov et al. 
(2006) have augmented hidden Markov models with Bayes networks trained to describe articulatory constraints from a small amount of Japanese vocal tract data, resulting in a small phonemeerror reduction. This work has since been expanded upon to inform ASR systems sensitive to physiological speech disorders (Rudzicz, 2009). Common among previous efforts is an interpretation of speech as a sequence of short, instantaneous observations devoid of long-term dynamics. 60 2.1 Articulatory phonology Articulatory phonology bridges the divide between the physical manifestation of speech and its underlying lexical intentions. Within this discipline, the theory of task dynamics is a combined model of physical articulator motion and the planning of abstract vocal tract configurations (Saltzman, 1986). This theory introduces the notion that all observed patterns of speech are the result of overlapping gestures, which are abstracted goaloriented reconfigurations of the vocal tract, such as bilabial closure or velar opening (Saltzman and Munhall, 1989). Each gesture occurs within one of the following tract variables (TVs): velar opening (VEL), lip aperture (LA) and protrusion (LP), tongue tip constriction location (TTCL) and degree (TTCD) 1, tongue body constriction location (TBCL) and degree (TBCD), lower tooth height (LTH), and glottal vibration (GLO). For example, the syllable pub consists of an onset (/p/), a nucleus (/ah/), and a coda (/b/). Four gestural goals are associated with the onset, namely the shutting of GLO and of VEL, and the closure and release of LA. Similarly, the nucleus of the syllable consists of three goals, namely the relocation of TBCD and TBCL, and the opening of GLO. The presence and extent of these gestural goals are represented by filled rectangles in figure 1. Inter-gestural timings between these goals are specified relative to one another according to human data as described by Nam and Saltzman (2003). TBCD closed open GLO open closed LA open closed 100 200 300 400 Time (ms) Figure 1: Canonical example pub from Saltzman and Munhall (1989). The presence of these discrete goals influences the vocal tract dynamically and continuously as modelled by the following non-homogeneous second-order linear differential equation: Mz′′ +Bz′ +K(z−z∗) = 0. (1) 1Constriction locations generally refer to the front-back dimension of the vocal tract and constriction degrees generally refer to the top-down dimension. Here, z is a continuous vector representing the instantaneous positions of the nine tract variables, z∗is the target (equilibrium) positions of those variables, and vectors z′ and z′′ represent the first and second derivatives of z with respect to time (i.e., velocity and acceleration), respectively. The matrices M, B, and K are syllable-specific coefficients describing the inertia, damping, and stiffness, respectively, of the virtual gestures. Generally, this theory assumes that the tract variables are mutually independent, and that the system is critically damped (i.e., the tract variables do not oscillate around their equilibrium positions) (Nam and Saltzman, 2003). The continuous state, z, of equation (1) is exemplified by black curves in figure 1. 2.2 Articulatory data Tract variables provide the dimensions of an abstract gestural space independent of the physical characteristics of the speaker. In order to complete our articulatory model, however, we require physical data from which to infer these high-level articulatory goals. 
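To make eq. (1) concrete, the following minimal Python sketch integrates a single, critically damped tract variable toward a gestural target using simple Euler steps. The unit mass, stiffness value, and time step are illustrative assumptions; they are not the syllable-specific M, B, K coefficients of the actual task-dynamics model.

import math

def critically_damped_step(z, v, z_target, k, m=1.0, dt=0.005):
    # One Euler step of m*z'' + b*z' + k*(z - z_target) = 0 with critical
    # damping b = 2*sqrt(m*k), so the variable does not oscillate around its target.
    b = 2.0 * math.sqrt(m * k)
    a = -(b * v + k * (z - z_target)) / m
    v = v + dt * a
    z = z + dt * v
    return z, v

def simulate_gesture(z0, z_target, k=400.0, duration=0.3, dt=0.005):
    # Trajectory of one tract variable (e.g., LA) driven toward a gestural target.
    z, v, traj = z0, 0.0, [z0]
    for _ in range(int(duration / dt)):
        z, v = critically_damped_step(z, v, z_target, k, dt=dt)
        traj.append(z)
    return traj

# e.g., lip aperture closing toward 0 during the /p/ onset of "pub"
la_path = simulate_gesture(z0=1.0, z_target=0.0)

Because the tract variables are assumed mutually independent, the nine-dimensional case amounts to nine such integrations run in parallel, each toward its own gesture-specific target.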
Electromagnetic articulography (EMA) is a method to measure the motion of the vocal tract during speech. In EMA, the speaker is placed within a low-amplitude electromagnetic field produced within a cube of a known geometry. Tiny sensors within this field induce small electric currents whose energy allows the inference of articulator positions and velocities to within 1 mm of error (Yunusova et al., 2009). We derive data for the following study from two EMA sources: • The University of Edinburgh’s MOCHA database, which provides phoneticallybalanced sentences repeated from TIMIT (Zue et al., 1989) uttered by a male and a female speaker (Wrench, 1999), and • The University of Toronto’s TORGO database, from which we select sentences repeated from TIMIT from two females and three males (Rudzicz et al., 2008). (Cerebrally palsied speech, which is the focus of this database, is not included here). For the following study we use the eight 2D positions common to both databases, namely the upper lip (UL), lower lip (LL), upper incisor (UI), lower incisor (LI), tongue tip (TT), tongue blade (TB), and tongue dorsum (TD). Since these positions are recorded in 3D in TORGO, we project 61 these onto the midsagittal plane. (Additionally, the MOCHA database provides velum (V) data on this plane, and TORGO provides the left and right lip corners (LL and RL) but these are excluded from study except where noted). All articulatory data is aligned with its associated acoustic data, which is transformed to Melfrequency cepstral coefficients (MFCCs). Since the 2D EMA system in MOCHA and the 3D EMA system in TORGO differ in their recording rates, the length of each MFCC frame in each database must differ in order to properly align acoustics with articulation in time. Therefore, each MFCC frame covers 16 ms in the TORGO database, and 32 ms in MOCHA. Phoneme boundaries are determined automatically in the MOCHA database by forced alignment, and by a speech-language pathologist in the TORGO database. We approximate the tract variable space from the physical space of the articulators, in general, through principal component analysis (PCA) on the latter, and subsequent sigmoid normalization on [0,1]. For example, the LTH tract variable is inferred by calculating the first principal component of the two-dimensional lower incisor (LI) motion in the midsagittal plane, and by normalizing the resulting univariate data through a scaled sigmoid. The VEL variable is inferred similarly from velum (V) EMA data. Tongue tip constriction location and degree (TTCL and TTCD, respectively) are inferred from the 1st and 2nd principal components of tongue tip (TT) EMA data, with TBCL and TBCD inferred similarly from tongue body (TB) data. Finally, the glottis (GLO) is inferred by voicing detection on acoustic energy below 150 Hz (O’Shaughnessy, 2000), lip aperture (LA) is the normalized Euclidean distance between the lips, and lip protrusion (LP) is the normalized 2nd principal component of the midpoint between the lips. All PCA is performed without segmentation of the data. The result is a low-dimensional set of continuous curves describing goal-relevant articulatory variables. Figure 2, for example, shows the degree of the lip aperture (LA) over time for all instances of the /b/ phoneme in the MOCHA database. The relevant articulatory goal of lip closure is evident. 3 Baseline systems We now turn to the task of speech recognition. 
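Before the baselines, a short sketch may help make the tract-variable approximation of section 2.2 concrete: a tract variable is taken as a principal component of a 2D articulator trajectory, squashed onto [0,1] with a scaled sigmoid. The use of scikit-learn and the particular scaling (one standard deviation) are assumptions for illustration; the paper does not spell out its exact scaling.

import numpy as np
from sklearn.decomposition import PCA

def tract_variable_from_ema(ema_xy, component=0):
    # Approximate a tract variable (e.g., LTH from lower-incisor motion) as a
    # principal component of 2D midsagittal EMA positions, normalized to (0, 1).
    pca = PCA(n_components=component + 1)
    scores = pca.fit_transform(ema_xy)[:, component]
    scaled = scores / (scores.std() + 1e-8)   # scale choice is an assumption
    return 1.0 / (1.0 + np.exp(-scaled))      # sigmoid onto (0, 1)

# li_xy would be an (n_frames, 2) array of lower-incisor positions; random
# values stand in for real EMA data here.
li_xy = np.random.randn(500, 2)
lth = tract_variable_from_ema(li_xy)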
Traditional Bayesian learning is restricted to universal or immutable relationships, and is agnos0 50 100 150 200 0 0.2 0.4 0.6 0.8 1 Time (ms) normalized LA Figure 2: Lip aperture (LA) over time during all MOCHA instances of /b/. tic towards dynamic systems or time-varying relationships. Dynamic Bayes networks (DBNs) are directed acyclic graphs that generalize the powerful stochastic mechanisms of Bayesian representation to temporal sequences. We are free to explicitly provide topological (i.e., dependency) relationships between relevant variables in our models, which can include measurements of tract data. We examine two baseline systems. The first is the standard acoustic hidden Markov model (HMM) augmented with a bigram language model, as shown in figure 3(a). Here, Wt →Wt+1 represents word transition probabilities, learned by maximum likelihood estimation, and Pht → Pht+1 represents phoneme transition probabilities whose order is explicitly specified by the relationship Wt →Pht. Likewise, each phoneme Ph conditions the sub-phoneme state, Qt, whose transition probabilities Qt →Qt+1 describe the dynamics within phonemes. The variable Mt refers to hidden Gaussian indices so that the likelihoods of acoustic observations, Ot, are represented by a mixture of 4, 8, 16, or 32 Gaussians for each state and each phoneme. See Murphy (2002) for a further description of this representation. The second baseline model is the articulatory dynamic Bayes network (DBN-A). This augments the standard acoustic HMM by replacing hidden indices, Mt, with discrete observations of the vocal tract, Kt, as shown in figure 3(b). The pattern of acoustics within each phoneme is dependent on a relatively restricted set of possible articulatory configurations (Roweis, 1999). To find these discrete positions, we obtain k vectors that best de62 scribe the articulatory data according to k-means clustering with the sum-of-squares error function. During training, the DBN variable Kt is set explicitly to the index of the mean vector nearest to the current frame of EMA data at time t. In this way, the relationship Kt →Ot allows us to learn how discretized articulatory configurations affect acoustics. The training of DBNs involves a specialized version of expectation-maximization, as described in the literature (Murphy, 2002; Ghahramani, 1998). During inference, variables Wt, Pht, and Kt become hidden and we marginalize over their possible values when computing their likelihoods. Bigrams are computed by maximum likelihood on lexical annotations in the training data. Mt Ot Mt+1 Ot+1 Qt Pht Qt+1 Pht+1 Wt Wt+1 (a) HMM Kt Ot Kt+1 Ot+1 Qt Pht Qt+1 Pht+1 Wt Wt+1 (b) DBN-A Figure 3: Baseline systems: (a) acoustic hidden Markov model and (b) articulatory dynamic Bayes network. Node Wt represents the current word, Pht is the current phoneme, Qt is that phoneme’s dynamic state, Ot is the acoustic observation, Mt is the Gaussian mixture component, and Kt is the discretized articulatory configuration. Filled nodes represent observed variables during training, although only Ot is observed during recognition. Square nodes are discrete variables while circular nodes are continuous variables. 4 Switching Kalman filter Our first experimental system attempts speech recognition given only articulatory data. The true state of the tract variables at time t −1 constitutes a 9-dimensional vector, xt−1, of continuous values. 
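Before developing this model, a brief aside on the DBN-A baseline above: the discrete articulatory configurations K_t can be produced with an off-the-shelf k-means step, as sketched below. The scikit-learn call, the choice of k, and the placeholder data are assumptions; only the sum-of-squares clustering itself comes from the description in section 3.

import numpy as np
from sklearn.cluster import KMeans

def discretize_articulation(ema_frames, k=32):
    # Cluster EMA-derived frames into k articulatory configurations (sum-of-squares
    # criterion). The index of the nearest mean is the observed value of the DBN-A
    # variable K_t for each training frame.
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(ema_frames)
    return km.labels_, km.cluster_centers_

ema_frames = np.random.randn(2000, 16)       # stand-in for real aligned EMA frames
k_t, centers = discretize_articulation(ema_frames, k=32)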
Under the task dynamics model of section 2.1, the motions of these tract variables obey critically damped second-order oscillatory relationships. We start with the simplifying assumption of linear dynamics here with allowances for random Gaussian process noise, vt, since articulatory behaviour is non-deterministic. Moreover, we know that EMA recordings are subject to some error (usually less than 1 mm (Yunusova et al., 2009)), so the actual observation at time t, yt, will not in general be the true position of the articulators. Assuming that the relationship between yt and xt is also linear, and that the measurement noise, wt, is also Gaussian, then the dynamical articulatory system can be described by xt = Dtxt−1 +vt yt = Ctxt +wt. (2) Eqs. 2 form the basis of the Kalman filter which allows us to use EMA measurements directly, rather than quantized abstractions thereof as in the DBN-A model. Obviously, since articulatory dynamics vary significantly for different goals, we replicate eq. (2) for each phoneme and connect these continuous Kalman filters together with discrete conditioning variables for phoneme and word, resulting in the switching Kalman filter (SKF) model. Here, parameters Dt and vt are implicit in the relationship xt →xt+1, and parameters Ct and wt are implicit in xt →yt. In this model, observation yt is the instantaneous measurements derived from EMA, and xt is their true hidden states. These parameters are trained using expectation-maximization, as described in the literature (Murphy, 1998; Deng et al., 2005). 5 Recognition with task dynamics Our goal is to integrate task dynamics within an ASR system for continuous sentences called TDASR. Our approach is to re-rank an N-best list of sentence hypotheses according to a weighted likelihood of their articulatory realizations. For example, if a word sequence Wi : wi,1 wi,2 ... wi,m has likelihoods LX(Wi) and LΛ(Wi) according to purely acoustic and articulatory interpretations of an utterance, respectively, then its overall score would be L(Wi) = αLX(Wi)+(1−α)LΛ(Wi) (3) given a weighting parameter α set manually, as in section 6.2. Acoustic likelihoods LX(Wi) are obtained from Viterbi paths through relevant HMMs in the standard fashion. 5.1 The TADA component In order to obtain articulatory likelihoods, LΛ(Wi), for each word sequence, we first generate articulatory realizations of those sequences according 63 to task dynamics. To this end, we use components from the open-source TADA system (Nam and Goldstein, 2006), which is a complete implementation of task dynamics. From this toolbox, we use the following components: • A syllabic dictionary supplemented with the International Speech Lexicon Dictionary (Hasegawa-Johnson and Fleck, 2007). This breaks word sequences Wi into syllable sequences Si consisting of onsets, nuclei, and coda and covers all of MOCHA and TORGO. • A syllable-to-gesture lookup table. Given a syllabic sequence, Si, this table provides the gestural goals necessary to produce those syllables. For example, given the syllable pub in figure 1, this table provides the targets for the GLO, VEL, TBCL, and TBCD tract variables, and the parameters for the second-order differential equation, eq. 1, that achieves those goals. These parameters have been empirically tuned by the authors of TADA according to a generic, speakerindependent representation of the vocal tract (Saltzman and Munhall, 1989). • A component that produces the continuous tract variable paths that produce an utterance. 
This component takes into account various physiological aspects of human speech production, including intergestural and interarticulator co-ordination and timing (Nam and Saltzman, 2003; Goldstein and Fowler, 2003), and the neutral (“schwa”) forces of the vocal tract (Saltzman and Munhall, 1989). This component takes a sequence of gestural goals predicted by the segment-to-gesture lookup table, and produces appropriate paths for each tract variable. The result of the TADA component is a set of N 9-dimensional articulatory paths, TVi, necessary to produce the associated word sequences, Wi for i = 1..N. Since task dynamics is a prescriptive model and fully deterministic, TVi sequences are the canonical or default articulatory realizations of the associated sentences. These canonical realizations are independent of our training data, so we transform them in order to more closely resemble the observed articulatory behaviour in our EMA data. Towards this end, we train a switching Kalman filter identical to that in section 4, except the hidden state variable xt is replaced by the observed instantaneous canonical TVs predicted by TADA. In this way we are explicitly learning a relationship between TADA’s task dynamics and human data. Since the lengths of these sequences are generally unequal, we align the articulatory behaviour predicted by TADA with training data from MOCHA and TORGO using standard dynamic time warping (Sakoe and Chiba, 1978). During run-time, the articulatory sequence yt most likely to have been produced by the human data given the canonical sequence TVi is inferred by the Viterbi algorithm through the SKF model with all other variables hidden. The result is a set of articulatory sequences, TV∗ i , for i = 1..N, that represent the predictions of task dynamics that better resemble our data. 5.2 Acoustic-articulatory inversion In order to estimate the articulatory likelihood of an utterance, we need to evaluate each transformed articulatory sequence, TV∗ i , within probability distributions ranging over all tract variables. These distributions can be inferred using acousticarticulatory inversion. There are a number of approaches to this task, including vector quantization, and expectation-maximization with Gaussian mixtures (Hogden and Valdez, 2001; Toda et al., 2008). These approaches accurately inferred the xy position of articulators to within 0.41 mm and 2.73 mm. Here, we modify the approach taken by Richmond et al. (2003), who estimate probability functions over the 2D midsagittal positions of 7 articulators, given acoustics, with a mixturedensity network (MDN). An MDN is essentially a typical discriminative multi-layer neural network whose output consists of the parameters to Gaussian mixtures. Here, each Gaussian mixture describes a probability function over TV positions given the acoustic frame at time t. For example, figure 4 shows an intensity map of the likely values for tongue-tip constriction degree (TTCD) for each frame of acoustics, superimposed with the ‘true’ trajectory of that TV. Our networks are trained with acoustic and EMA-derived data as described in section 2.2. 5.3 Recognition by reranking During recognition of a test utterance, a standard acoustic HMM produces word sequence hypotheses, Wi, and associated likelihoods, L(Wi), for i = 1..N. The expected canonical motion of the tract variables, TVi is then produced by task dynamics 64 Figure 4: Example probability density of tongue tip constriction degree over time, inferred from acoustics. 
The true trajectory is superimposed as a black curve. for each of these word sequences and transformed by an SKF to better match speaker data, giving TV∗ i . The likelihoods of these paths are then evaluated within probability distributions produced by an MDN. The mechanism for producing the articulatory likelihood is shown in figure 5. The overall likelihood, L(Wi) = αLX(Wi) + (1 −α)LΛ(Wi), is then used to produce a final hypothesis list for the given acoustic input. 6 Experiments Experimental data is obtained from two sources, as described in section 2.2. We procure 1200 sentences from Toronto’s TORGO database, and 896 from Edinburgh’s MOCHA. In total, there are 460 total unique sentence forms, 1092 total unique word forms, and 11065 total words uttered. Except where noted, all experiments randomly split the data into 90% training and 10% testing sets for 5-cross validation. MOCHA and TORGO data are never combined in a single training set due to differing EMA recording rates. In all cases, models are database-dependent (i.e., all TORGO data is conflated, as is all of MOCHA). For each of our baseline systems, we calculate the phoneme-error-rate (PER) and word-errorrate (WER) after training. The phoneme-errorrate is calculated according to the proportion of frames of speech incorrectly assigned to the proper phoneme. The word-error-rate is calculated as the sum of insertion, deletion, and substitution errors in the highest-ranked hypothesis divided by the total number of words in the correct orthography. The traditional HMM is compared by varying the number of Gaussians used in the modelling System Parameters PER (%) WER (%) HMM |M| = 4 29.3 14.5 |M| = 8 27.0 13.9 |M| = 16 26.1 10.2 |M| = 32 25.6 9.7 DBN-A |K| = 4 26.1 13.0 |K| = 8 25.2 11.3 |K| = 16 24.9 9.8 |K| = 32 24.8 9.4 Table 1: Phoneme- and Word-Error-Rate (PER and WER) for different parameterizations of the baseline systems. No. of Gaussians 1 2 3 4 LTH −0.28 −0.18 −0.15 −0.11 LA −0.36 −0.32 −0.30 −0.29 LP −0.46 −0.44 −0.43 −0.43 GLO −1.48 −1.30 −1.29 −1.25 TTCD −1.79 −1.60 −1.51 −1.47 TTCL −1.81 −1.62 −1.53 −1.49 TBCD −0.88 −0.79 −0.75 −0.72 TDCL −0.22 −0.20 −0.18 −0.17 Table 2: Average log likelihood of true tract variable positions in test data, under distributions produced by mixture density networks with varying numbers of Gaussians. of acoustic observations. Similarly, the DBN-A model is compared by varying the number of discrete quantizations of articulatory configurations, as described in section 3. Results are obtained by direct decoding. The average results across both databases, between which there are no significant differences, are shown in table 1. In all cases the DBN-A model outperforms the HMM, which highlights the benefit of explicitly conditioning acoustic observations on articulatory causes. 6.1 Efficacy of TD-ASR components In order to evaluate the whole system, we start by evaluating its parts. First, we test how accurately the mixture-density network (MDN) estimates the position of the articulators given only information from the acoustics available during recognition. Table 2 shows the average log likelihood over each tract variable across both databases. These results are consistent with the state-of-the-art (Toda et al., 2008). In the following experiments, we use MDNs that produce 4 Gaussians. 65 Acoustics ASR ASR MDN MDN W1 W2 ... WN N-best hypotheses TADA TADA TV1 TV2 ... TVN Canonical Tract Variables TRANS TRANS TV*1 TV*2 ... TV*N Modified Tract Variables P(TVi*) W*1 W*2 ... 
W*N Reranked list Figure 5: The TD-ASR mechanism for deriving articulatory likelihoods, LΛ(Wi), for each word sequence Wi produced by standard acoustic techniques. Manner Canonical Transformed approximant 0.19 0.16 fricative 0.37 0.29 nasal* 0.24 0.18 retroflex 0.23 0.19 plosive 0.10 0.08 vowel 0.27 0.25 Table 3: Average difference between predicted tract variables and observed data, on [0,1] scale. (*) Nasals are evaluated only with MOCHA data, since TORGO data lacks velum measurements. We evaluate how closely transformations to the canonical tract variables predicted by TADA match the data. Namely, we input the known orthography for each test utterance into TADA, obtain the predicted canonical tract variables TV, and transform these according to our trained SKF. The resulting predicted and transformed sequences are aligned with our measurements derived from EMA with dynamic time warping. Finally, we measure the average difference between the observed data and the predicted (canonical and transformed) tract variables. Table 3 shows these differences according to the phonological manner of articulation. In all cases the transformed tract variable motion is more accurate, and significantly so at the 95% confidence level for nasal and retroflex phonemes, and at 99% for fricatives. The practical utility of the transformation component is evaluated in its effect on recognition rates, as described below. 6.2 Recognition with TD-ASR With the performance of the components of TDASR better understood, we combine these and study the resulting composite TD-ASR system. 0 0.2 0.4 0.6 0.8 1 8 8.5 9 9.5 10 α WER (%) TORGO MOCHA Figure 6: Word-error-rate according to varying α, for both TORGO and MOCHA data. Figure 6 shows the WER as a function of α with TD-ASR and N = 4 hypotheses per utterance. The effect of α is clearly non-monotonic, with articulatory information clearly proving useful. Although systems whose rankings are weighted solely by the articulatory component perform better than the exclusively acoustic systems, the lists available to the former are procured from standard acoustic ASR. Interestingly, the gap between systems trained to the two databases increases as α approaches 1.0. Although this gap is not significant, it may be the result of increased inter-speaker articulatory variation in the TORGO database, which includes more than twice as many speakers as MOCHA. Figure 7 shows the WER obtained with TDASR given varying-length N-best lists and α = 0.7. TD-ASR accuracy at N = 4 is significantly better than both TD-ASR at N = 2 and the baseline approaches of table 1 at the 95% confidence level. However, for N > 4 there is a noticeable and systematic worsening of performance. 66 2 3 4 5 6 7 8 8.2 8.4 8.6 8.8 9 9.2 9.4 9.6 9.8 Length of N−best list WER (%) TORGO MOCHA Figure 7: Word-error-rate according to varying lengths of N-best hypotheses used, for both TORGO and MOCHA data. The optimal parameterization of the TD-ASR model results in an average word-error-rate of 8.43%, which represents a 10.3% relative error reduction over the best parameterization of our baseline models. The SKF model of section 4 differs from the HMM and DBN-A baseline models only in its use of continuous (rather than discrete) hidden dynamics and in its articulatory observations. However, its performance is far more variable, and less conclusive. 
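The reranking score underlying these comparisons, eq. (3), is straightforward to state in code. The sketch below uses made-up scores and assumes the acoustic and articulatory log-likelihoods are on comparable scales; it is an illustration of the weighting, not the full TD-ASR pipeline.

def rerank(hypotheses, alpha=0.7):
    # Combine acoustic and articulatory scores as in eq. (3),
    # L(W_i) = alpha * L_X(W_i) + (1 - alpha) * L_Lambda(W_i),
    # and return the N-best list sorted by the combined score.
    scored = [(alpha * lx + (1.0 - alpha) * la, words)
              for (words, lx, la) in hypotheses]
    return [words for _, words in sorted(scored, reverse=True)]

# (word sequence, acoustic log-likelihood, articulatory log-likelihood) -- illustrative
nbest = [("the cat sat", -12.1, -15.0),
         ("the cap sat", -11.8, -19.3),
         ("a cat sat", -13.0, -14.2)]
best = rerank(nbest, alpha=0.7)[0]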
On the MOCHA database the SKF model had an average of 9.54% WER with a standard deviation of 0.73 over 5 trials, and an average of 9.04% WER with a standard deviation of 0.64 over 5 trials on the TORGO database. Despite the presupposed utility of direct articulatory observations, the SKF system does not perform significantly better than the best DBN-A model. Finally, the experiments of tables 6 and 7 are repeated with the canonical tract variables passed untransformed to the probability maps generated by the MDNs. Predictably, resulting articulatory likelihoods LΛ are less representative and increasing their contribution α to the hypothesis reranking does not improve TD-ASR performance significantly, and in some instances worsens it. Although TADA is a useful prescriptive model of generic articulation, its use must be tempered with knowledge of inter-speaker variability. 7 Discussion and conclusions The articulatory medium of speech rarely informs modern speech recognition. We have demonstrated that the use of direct articulatory knowledge can substantially reduce phoneme and word errors in speech recognition, especially if that knowledge is motivated by high-level abstractions of vocal tract behaviour. Task dynamic theory provides a coherent and biologically plausible model of speech production with consequences for phonology (Browman and Goldstein, 1986), neurolinguistics (Guenther and Perkell, 2004), and the evolution of speech and language (Goldstein et al., 2006). We have shown that it is also useful within speech recognition. We have overcome a conceptual impediment in integrating task dynamics and ASR, which is the former’s deterministic nature. This integration is accomplished by stochastically transforming predicted articulatory dynamics and by calculating the likelihoods of these dynamics according to speaker data. However, there are several new avenues for exploration. For example, task dynamics lends itself to more general applications of control theory, including automated self-correction, rhythm, co-ordination, and segmentation (Friedland, 2005). Other high-level questions also remain, such as whether discrete gestures are the correct biological and practical paradigm, whether a purely continuous representation would be more appropriate, and whether this approach generalizes to other languages. In general, our experiments have revealed very little difference between the use of MOCHA and TORGO EMA data. An ad hoc analysis of some of the errors produced by the TD-ASR system found no particular difference between how systems trained to each of these databases recognized nasal phonemes, although only those trained with MOCHA considered velum motion. Other errors common to both sources of data include phoneme insertion errors, normally vowels, which appear to co-occur with some spurious motion of the tongue between segments, especially for longer N-best lists. Despite the relative slow motion of the articulators relative to acoustics, there remains some intermittent noise. As more articulatory data becomes available and as theories of speech production become more refined, we expect that their combined value to speech recognition will become indispensable. Acknowledgments This research is funded by the Natural Sciences and Engineering Research Council and the University of Toronto. 67 References Catherine P. Browman and Louis M. Goldstein. 1986. Towards an articulatory phonology. Phonology Yearbook, 3:219–252. 
Alessandro D’Ausilio, Friedemann Pulvermuller, Paola Salmas, Ilaria Bufalari, Chiara Begliomini, and Luciano Fadiga. 2009. The motor somatotopy of speech perception. Current Biology, 19(5):381–385, February. Jianping Deng, M. Bouchard, and Tet Yeap. 2005. Speech Enhancement Using a Switching Kalman Filter with a Perceptual Post-Filter. In Acoustics, Speech, and Signal Processing, 2005. Proceedings. (ICASSP ’05). IEEE International Conference on, volume 1, pages 1121–1124, 18-23,. Bernard Friedland. 2005. Control System Design: An Introduction to State-Space Methods. Dover. Zoubin Ghahramani. 1998. Learning dynamic Bayesian networks. In Adaptive Processing of Sequences and Data Structures, pages 168–197. Springer-Verlag. Louis M. Goldstein and Carol Fowler. 2003. Articulatory phonology: a phonology for public language use. Phonetics and Phonology in Language Comprehension and Production: Differences and Similarities. Louis Goldstein, Dani Byrd, and Elliot Saltzman. 2006. The role of vocal tract gestural action units in understanding the evolution of phonology. In M.A. Arib, editor, Action to Language via the Mirror Neuron System, pages 215– 249. Cambridge University Press, Cambridge, UK. Frank H. Guenther and Joseph S. Perkell. 2004. A neural model of speech production and its application to studies of the role of auditory feedback in speech. In Ben Maassen, Raymond Kent, Herman Peters, Pascal Van Lieshout, and Wouter Hulstijn, editors, Speech Motor Control in Normal and Disordered Speech, chapter 4, pages 29–49. Oxford University Press, Oxford. William J. Hardcastle and Nigel Hewlett, editors. 1999. Coarticulation – Theory, Data, and Techniques. Cambridge University Press. Mark Hasegawa-Johnson and Margaret Fleck. 2007. International Speech Lexicon Project. John Hogden and Patrick Valdez. 2001. A stochastic articulatory-to-acoustic mapping as a basis for speech recognition. In Proceedings of the 18th IEEE Instrumentation and Measurement Technology Conference, 2001. IMTC 2001, volume 2, pages 1105–1110 vol.2. Katrin Kirchhoff. 1999. Robust Speech Recognition Using Articulatory Information. Ph.D. thesis, University of Bielefeld, Germany, July. Alvin M. Liberman and Ignatius G. Mattingly. 1985. The motor theory of speech perception revised. Cognition, 21:1–36. Konstantin Markov, Jianwu Dang, and Satoshi Nakamura. 2006. Integration of articulatory and spectrum features based on the hybrid HMM/BN modeling framework. Speech Communication, 48(2):161–175, February. Kevin Patrick Murphy. 1998. Switching Kalman Filters. Technical report. Kevin Patrick Murphy. 2002. Dynamic Bayesian Networks: Representation, Inference and Learning. Ph.D. thesis, University of California at Berkeley. Hosung Nam and Louis Goldstein. 2006. TADA (TAsk Dynamics Application) manual. Hosung Nam and Elliot Saltzman. 2003. A competitive, coupled oscillator model of syllable structure. In Proceedings of the 15th International Congress of Phonetic Sciences (ICPhS 2003), pages 2253–2256, Barcelona, Spain. Douglas O’Shaughnessy. 2000. Speech Communications – Human and Machine. IEEE Press, New York, NY, USA. Korin Richmond, Simon King, and Paul Taylor. 2003. Modelling the uncertainty in recovering articulation from acoustics. Computer Speech and Language, 17:153–172. Sam T. Roweis. 1999. Data Driven Production Models for Speech Processing. Ph.D. thesis, California Institute of Technology, Pasadena, California. Frank Rudzicz, Pascal van Lieshout, Graeme Hirst, Gerald Penn, Fraser Shein, and Talya Wolff. 2008. 
Towards a comparative database of dysarthric articulation. In Proceedings of the eighth International Seminar on Speech Production (ISSP’08), Strasbourg France, December. Frank Rudzicz. 2009. Applying discretized articulatory knowledge to dysarthric speech. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP09), Taipei, Taiwan, April. Hiroaki Sakoe and Seibi Chiba. 1978. Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-26, February. Elliot L. Saltzman and Kevin G. Munhall. 1989. A dynamical approach to gestural patterning in speech production. Ecological Psychology, 1(4):333–382. Elliot M. Saltzman, 1986. Task dynamic co-ordination of the speech articulators: a preliminary model, pages 129–144. Springer-Verlag. Tomoki Toda, Alan W. Black, and Keiichi Tokuda. 2008. Statistical mapping between articulatory movements and acoustic spectrum using a Gaussian mixture model. Speech Communication, 50(3):215–227, March. Alan Wrench. 1999. The MOCHA-TIMIT articulatory database, November. Yana Yunusova, Jordan R. Green, and Antje Mefferd. 2009. Accuracy Assessment for AG500, Electromagnetic Articulograph. Journal of Speech, Language, and Hearing Research, 52:547–555, April. Victor Zue, Stephanie Seneff, and James Glass. 1989. Speech Database Development: TIMIT and Beyond. In Proceedings of ESCA Tutorial and Research Workshop on Speech Input/Output Assessment and Speech Databases (SIOA-1989), volume 2, pages 35–40, Noordwijkerhout, The Netherlands. 68
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 678–687, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A Taxonomy, Dataset, and Classifier for Automatic Noun Compound Interpretation Stephen Tratz and Eduard Hovy Information Sciences Institute University of Southern California Marina del Rey, CA 90292 {stratz,hovy}@isi.edu Abstract The automatic interpretation of noun-noun compounds is an important subproblem within many natural language processing applications and is an area of increasing interest. The problem is difficult, with disagreement regarding the number and nature of the relations, low inter-annotator agreement, and limited annotated data. In this paper, we present a novel taxonomy of relations that integrates previous relations, the largest publicly-available annotated dataset, and a supervised classification method for automatic noun compound interpretation. 1 Introduction Noun compounds (e.g., ‘maple leaf’) occur very frequently in text, and their interpretation— determining the relationships between adjacent nouns as well as the hierarchical dependency structure of the NP in which they occur—is an important problem within a wide variety of natural language processing (NLP) applications, including machine translation (Baldwin and Tanaka, 2004) and question answering (Ahn et al., 2005). The interpretation of noun compounds is a difficult problem for various reasons (Spärck Jones, 1983). Among them is the fact that no set of relations proposed to date has been accepted as complete and appropriate for general-purpose text. Regardless, automatic noun compound interpretation is the focus of an upcoming SEMEVAL task (Butnariu et al., 2009). Leaving aside the problem of determining the dependency structure among strings of three or more nouns—a problem we do not address in this paper—automatic noun compound interpretation requires a taxonomy of noun-noun relations, an automatic method for accurately assigning the relations to noun compounds, and, in the case of supervised classification, a sufficiently large dataset for training. Earlier work has often suffered from using taxonomies with coarse-grained, highly ambiguous predicates, such as prepositions, as various labels (Lauer, 1995) and/or unimpressive inter-annotator agreement among human judges (Kim and Baldwin, 2005). In addition, the datasets annotated according to these various schemes have often been too small to provide wide coverage of the noun compounds likely to occur in general text. In this paper, we present a large, fine-grained taxonomy of 43 noun compound relations, a dataset annotated according to this taxonomy, and a supervised, automatic classification method for determining the relation between the head and modifier words in a noun compound. We compare and map our relations to those in other taxonomies and report the promising results of an inter-annotator agreement study as well as an automatic classification experiment. We examine the various features used for classification and identify one very useful, novel family of features. Our dataset is, to the best of our knowledge, the largest noun compound dataset yet produced. We will make it available via http://www.isi.edu. 2 Related Work 2.1 Taxonomies The relations between the component nouns in noun compounds have been the subject of various linguistic studies performed throughout the years, including early work by Jespersen (1949). The taxonomies they created are varied. 
Lees created an early taxonomy based primarily upon grammar (Lees, 1960). Levi’s influential work postulated that complex nominals (Levi’s name for noun compounds that also permits certain adjectival modifiers) are all derived either via nominalization or 678 by deleting one of nine predicates (i.e., CAUSE, HAVE, MAKE, USE, BE, IN, FOR, FROM, ABOUT) from an underlying sentence construction (Levi, 1978). Of the taxonomies presented by purely linguistic studies, our categories are most similar to those proposed by Warren (1978), whose categories (e.g., MATERIAL+ARTEFACT, OBJ+PART) are generally less ambiguous than Levi’s. In contrast to studies that claim the existence of a relatively small number of semantic relations, Downing (1977) presents a strong case for the existence of an unbounded number of relations. While we agree with Downing’s belief that the number of relations is unbounded, we contend that the vast majority of noun compounds fits within a relatively small set of categories. The relations used in computational linguistics vary much along the same lines as those proposed earlier by linguists. Several lines of work (Finin, 1980; Butnariu and Veale, 2008; Nakov, 2008) assume the existence of an unbounded number of relations. Others use categories similar to Levi’s, such as Lauer’s (1995) set of prepositional paraphrases (i.e., OF, FOR, IN, ON, AT, FROM, WITH, ABOUT) to analyze noun compounds. Some work (e.g., Barker and Szpakowicz, 1998; Nastase and Szpakowicz, 2003; Girju et al., 2005; Kim and Baldwin, 2005) use sets of categories that are somewhat more similar to those proposed by Warren (1978). While most of the noun compound research to date is not domain specific, Rosario and Hearst (2001) create and experiment with a taxonomy tailored to biomedical text. 2.2 Classification The approaches used for automatic classification are also varied. Vanderwende (1994) presents one of the first systems for automatic classification, which extracted information from online sources and used a series of rules to rank a set of most likely interpretations. Lauer (1995) uses corpus statistics to select a prepositional paraphrase. Several lines of work, including that of Barker and Szpakowicz (1998), use memory-based methods. Kim and Baldwin (2005) and Turney (2006) use nearest neighbor approaches based upon WordNet (Fellbaum, 1998) and Turney’s Latent Relational Analysis, respectively. Rosario and Hearst (2001) utilize neural networks to classify compounds according to their domain-specific relation taxonomy. Moldovan et al. (2004) use SVMs as well as a novel algorithm (i.e., semantic scattering). Nastase et al. (2006) experiment with a variety of classification methods including memory-based methods, SVMs, and decision trees. Ó Séaghdha and Copestake (2009) use SVMs and experiment with kernel methods on a dataset labeled using a relatively small taxonomy. Girju (2009) uses crosslinguistic information from parallel corpora to aid classification. 3 Taxonomy 3.1 Creation Given the heterogeneity of past work, we decided to start fresh and build a new taxonomy of relations using naturally occurring noun pairs, and then compare the result to earlier relation sets. We collected 17509 noun pairs and over a period of 10 months assigned one or more relations to each, gradually building and refining our taxonomy. More details regarding the dataset are provided in Section 4. 
The relations we produced were then compared to those present in other taxonomies (e.g., Levi, 1978; Warren, 1978; Barker and Szpakowicz, 1998; Girju et al., 2005), and they were found to be fairly similar. We present a detailed comparison in Section 3.4. We tested the relation set with an initial inter-annotator agreement study (our latest interannotator agreement study results are presented in Section 6). However, the mediocre results indicated that the categories and/or their definitions needed refinement. We then embarked on a series of changes, testing each generation by annotation using Amazon’s Mechanical Turk service, a relatively quick and inexpensive online platform where requesters may publish tasks for anonymous online workers (Turkers) to perform. Mechanical Turk has been previously used in a variety of NLP research, including recent work on noun compounds by Nakov (2008) to collect short phrases for linking the nouns within noun compounds. For the Mechanical Turk annotation tests, we created five sets of 100 noun compounds from noun compounds automatically extracted from a random subset of New York Times articles written between 1987 and 2007 (Sandhaus, 2008). Each of these sets was used in a separate annotation round. For each round, a set of 100 noun compounds was uploaded along with category defini679 Category Name % Example Approximate Mappings Causal Group COMMUNICATOR OF COMMUNICATION 0.77 court order ⊃BGN:Agent, ⊃L:Acta+Producta, ⊃V:Subj PERFORMER OF ACT/ACTIVITY 2.07 police abuse ⊃BGN:Agent, ⊃L:Acta+Producta, ⊃V:Subj CREATOR/PROVIDER/CAUSE OF 2.55 ad revenue ⊂BGV:Cause(d-by), ⊂L:Cause2, ⊂N:Effect Purpose/Activity Group PERFORM/ENGAGE_IN 13.24 cooking pot ⊃BGV:Purpose, ⊃L:For, ≈N:Purpose, ⊃W:Activity∪Purpose CREATE/PROVIDE/SELL 8.94 nicotine patch ∞BV:Purpose, ⊂BG:Result, ∞G:Make-Produce, ⊂GNV:Cause(s), ∞L:Cause1∪Make1∪For, ⊂N:Product, ⊃W:Activity∪Purpose OBTAIN/ACCESS/SEEK 1.50 shrimp boat ⊃BGNV:Purpose, ⊃L:For, ⊃W:Activity∪Purpose MODIFY/PROCESS/CHANGE 1.50 eye surgery ⊃BGNV:Purpose, ⊃L:For, ⊃W:Activity∪Purpose MITIGATE/OPPOSE/DESTROY 2.34 flak jacket ⊃BGV:Purpose, ⊃L:For, ≈N:Detraction, ⊃W:Activity∪Purpose ORGANIZE/SUPERVISE/AUTHORITY 4.82 ethics board ⊃BGNV:Purpose/Topic, ⊃L:For/Abouta, ⊃W:Activity PROPEL 0.16 water gun ⊃BGNV:Purpose, ⊃L:For, ⊃W:Activity∪Purpose PROTECT/CONSERVE 0.25 screen saver ⊃BGNV:Purpose, ⊃L:For, ⊃W:Activity∪Purpose TRANSPORT/TRANSFER/TRADE 1.92 freight train ⊃BGNV:Purpose, ⊃L:For, ⊃W:Activity∪Purpose TRAVERSE/VISIT 0.11 tree traversal ⊃BGNV:Purpose, ⊃L:For, ⊃W:Activity∪Purpose Ownership, Experience, Employment, and Use POSSESSOR + OWNED/POSSESSED 2.11 family estate ⊃BGNVW:Possess*, ⊃L:Have2 EXPERIENCER + COGINITION/MENTAL 0.45 voter concern ⊃BNVW:Possess*, ≈G:Experiencer, ⊃L:Have2 EMPLOYER + EMPLOYEE/VOLUNTEER 2.72 team doctor ⊃BGNVW:Possess*, ⊃L:For/Have2, ⊃BGN:Beneficiary CONSUMER + CONSUMED 0.09 cat food ⊃BGNVW:Purpose, ⊃L:For, ⊃BGN:Beneficiary USER/RECIPIENT + USED/RECEIVED 1.02 voter guide ⊃BNVW:Purpose, ⊃G:Recipient, ⊃L:For, ⊃BGN:Beneficiary OWNED/POSSESSED + POSSESSION 1.20 store owner ≈G:Possession, ⊃L:Have1, ≈W:Belonging-Possessor EXPERIENCE + EXPERIENCER 0.27 fire victim ≈G:Experiencer, ∞L:Have1 THING CONSUMED + CONSUMER 0.41 fruit fly ⊃W:Obj-SingleBeing THING/MEANS USED + USER 1.96 faith healer ≈BNV:Instrument, ≈G:Means∪Instrument, ≈L:Use, ⊂W:MotivePower-Obj Temporal Group TIME [SPAN] + X 2.35 night work ≈BNV:Time(At), ⊃G:Temporal, ≈L:Inc, ≈W:Time-Obj X + TIME [SPAN] 0.50 birth date ⊃G:Temporal, ≈W:Obj-Time Location and 
Whole+Part/Member of LOCATION/GEOGRAPHIC SCOPE OF X 4.99 hillside home ≈BGV:Locat(ion/ive), ≈L:Ina∪Fromb, B:Source, ≈N:Location(At/From), ≈W:Place-Obj∪PlaceOfOrigin WHOLE + PART/MEMBER OF 1.75 robot arm ⊃B:Possess*, ≈G:Part-Whole, ⊃L:Have2, ≈N:Part, ≈V:Whole-Part, ≈W:Obj-Part∪Group-Member Composition and Containment Group SUBSTANCE/MATERIAL/INGREDIENT + WHOLE 2.42 plastic bag ⊂BNVW:Material*, ∞GN:Source, ∞L:Froma, ≈L:Have1, ∞L:Make2b, ∞N:Content PART/MEMBER + COLLECTION/CONFIG/SERIES 1.78 truck convoy ≈L:Make2ac, ≈N:Whole, ≈V:Part-Whole, ≈W:Parts-Whole X + SPATIAL CONTAINER/LOCATION/BOUNDS 1.39 shoe box ⊃B:Content∪Located, ⊃L:For, ⊃L:Have1, ≈N:Location, ≈W:Obj-Place Topic Group TOPIC OF COMMUNICATION/IMAGERY/INFO 8.37 travel story ⊃BGNV:Topic, ⊃L:Aboutab, ⊃W:SubjectMatter, ⊂G:Depiction TOPIC OF PLAN/DEAL/ARRANGEMENT/RULES 4.11 loan terms ⊃BGNV:Topic, ⊃L:Abouta, ⊃W:SubjectMatter TOPIC OF OBSERVATION/STUDY/EVALUATION 1.71 job survey ⊃BGNV:Topic, ⊃L:Abouta, ⊃W:SubjectMatter TOPIC OF COGNITION/EMOTION 0.58 jazz fan ⊃BGNV:Topic, ⊃L:Abouta, ⊃W:SubjectMatter TOPIC OF EXPERT 0.57 policy wonk ⊃BGNV:Topic, ⊃L:Abouta, ⊃W:SubjectMatter TOPIC OF SITUATION 1.64 oil glut ⊃BGNV:Topic, ≈L:Aboutc TOPIC OF EVENT/PROCESS 1.09 lava flow ⊃G:Theme, ⊃V:Subj Attribute Group TOPIC/THING + ATTRIB 4.13 street name ⊃BNV:Possess*, ≈G:Property, ⊃L:Have2, ≈W:Obj-Quality TOPIC/THING + ATTRIB VALUE CHARAC OF 0.31 earth tone Attributive and Coreferential COREFERENTIAL 4.51 fighter plane ≈BV:Equative, ⊃G:Type∪IS-A, ≈L:BEbcd, ≈N:Type∪Equality, ≈W:Copula PARTIAL ATTRIBUTE TRANSFER 0.69 skeleton crew ≈W:Resemblance, ⊃G:Type MEASURE + WHOLE 4.37 hour meeting ≈G:Measure, ⊂N:TimeThrough∪Measure, ≈W:Size-Whole Other HIGHLY LEXICALIZED / FIXED PAIR 0.65 pig iron OTHER 1.67 contact lens Table 1: The semantic relations, their frequency in the dataset, examples, and approximate relation mappings to previous relation sets. ≈-approximately equivalent; ⊃/⊂-super/sub set; ∞-some overlap; ∪-union; initials BGLNVW refer respectively to the works of (Barker and Szpakowicz, 1998; Girju et al., 2005; Girju, 2007; Levi, 1978; Nastase and Szpakowicz, 2003; Vanderwende, 1994; Warren, 1978). 680 tions and examples. Turkers were asked to select one or, if they deemed it appropriate, two categories for each noun pair. After all annotations for the round were completed, they were examined, and any taxonomic changes deemed appropriate (e.g., the creation, deletion, and/or modification of categories) were incorporated into the taxonomy before the next set of 100 was uploaded. The categories were substantially modified during this process. They are shown in Table 1 along with examples and an approximate mapping to several other taxonomies. 3.2 Category Descriptions Our categories are defined with sentences. For example, the SUBSTANCE category has the definition n1 is one of the primary physical substances/materials/ingredients that n2 is made/composed out of/from. Our LOCATION category’s definition reads n1 is the location / geographic scope where n2 is at, near, from, generally found, or occurs. Defining the categories with sentences is advantageous because it is possible to create straightforward, explicit defintions that humans can easily test examples against. 3.3 Taxonomy Groupings In addition to influencing the category definitions, some taxonomy groupings were altered with the hope that this would improve inter-annotator agreement for cases where Turker disagreement was systematic. 
For example, LOCATION and WHOLE + PART/MEMBER OF were commonly disagreed upon by Turkers so they were placed within their own taxonomic subgroup. The ambiguity between these categories has previously been observed by Girju (2009). Turkers also tended to disagree between the categories related to composition and containment. Due this apparent similarity they were also grouped together in the taxonomy. The ATTRIBUTE categories are positioned near the TOPIC group because some Turkers chose a TOPIC category when an ATTRIBUTE category was deemed more appropriate. This may be because attributes are relatively abstract concepts that are often somewhat descriptive of whatever possesses them. A prime example of this is street name. 3.4 Contrast with other Taxonomies In order to ensure completeness, we mapped into our taxonomy the relations proposed in most previous work including those of Barker and Szpakowicz (1998) and Girju et al. (2005). The results, shown in Table 1, demonstrate that our taxonomy is similar to several taxonomies used in other work. However, there are three main differences and several less important ones. The first major difference is the absence of a significant THEME or OBJECT category. The second main difference is that our taxonomy does not include a PURPOSE category and, instead, has several smaller categories. Finally, instead of possessing a single TOPIC category, our taxonomy has several, finer-grained TOPIC categories. These differences are significant because THEME/OBJECT, PURPOSE, and TOPIC are typically among the most frequent categories. THEME/OBJECT is typically the category to which other researchers assign noun compounds whose head noun is a nominalized verb and whose modifier noun is the THEME/OBJECT of the verb. This is typically done with the justification that the relation/predicate (the root verb of the nominalization) is overtly expressed. While including a THEME/OBJECT category has the advantage of simplicity, its disadvantages are significant. This category leads to a significant ambiguity in examples because many compounds fitting the THEME/OBJECT category also match some other category as well. Warren (1978) gives the examples of soup pot and soup container to illustrate this issue, and Girju (2009) notes a substantial overlap between THEME and MAKEPRODUCE. Our results from Mechanical Turk showed significant overlap between PURPOSE and OBJECT categories (present in an earlier version of the taxonomy). For this reason, we do not include a separate THEME/OBJECT category. If it is important to know whether the modifier also holds a THEME/OBJECT relationship, we suggest treating this as a separate classification task. The absence of a single PURPOSE category is another distinguishing characteristic of our taxonomy. Instead, the taxonomy includes a number of finer-grained categories (e.g., PERFORM/ENGAGE_IN), which can be conflated to create a PURPOSE category if necessary. During our Mechanical Turk-based refinement process, our now-defunct PURPOSE category was found to be ambiguous with many other categories as well as difficult to define. This problem has been noted by others. For example, Warren (1978) 681 points out that tea in tea cup qualifies as both the content and the purpose of the cup. Similarly, while WHOLE+PART/MEMBER was selected by most Turkers for bike tire, one individual chose PURPOSE. 
Our investigation identified five main purpose-like relations that most of our PURPOSE examples can be divided into, including activity performance (PERFORM/ENGAGE_IN), creation/provision (CREATE/PROVIDE/CAUSE OF), obtainment/access (OBTAIN/ACCESS/SEEK), supervision/management (ORGANIZE/SUPERVISE/AUTHORITY), and opposition (MITIGATE/OPPOSE/DESTROY). The third major distinguishing different between our taxonomy and others is the absence of a single TOPIC/ABOUT relation. Instead, our taxonomy has several finer-grained categories that can be conflated into a TOPIC category. Unlike the previous two distinguishing characteristics, which were motivated primarily by Turker annotations, this separation was largely motivated by author dissatisfaction with a single TOPIC category. Two differentiating characteristics of less importance are the absence of BENEFICIARY or SOURCE categories (Barker and Szpakowicz, 1998; Nastase and Szpakowicz, 2003; Girju et al., 2005). Our EMPLOYER, CONSUMER, and USER/RECIPIENT categories combined more or less cover BENEFICIARY. Since SOURCE is ambiguous in multiple ways including causation (tsunami injury), provision (government grant), ingredients (rice wine), and locations (north wind), we chose to exclude it. 4 Dataset Our noun compound dataset was created from two principal sources: an in-house collection of terms extracted from a large corpus using partof-speech tagging and mutual information and the Wall Street Journal section of the Penn Treebank. Compounds including one or more proper nouns were ignored. In total, the dataset contains 17509 unique, out-of-context examples, making it by far the largest hand-annotated compound noun dataset in existence that we are aware of. Proper nouns were not included. The next largest available datasets have a variety of drawbacks for noun compound interpretation in general text. Kim and Baldwin’s (2005) dataset is the second largest available dataset, but inter-annotator agreement was only 52.3%, and the annotations had an usually lopsided distribution; 42% of the data has TOPIC labels. Most (73.23%) of Girju’s (2007) dataset consists of noun-preposition-noun constructions. Rosario and Heart’s (2001) dataset is specific to the biomedical domain, while Ó Séaghdha and Copestake’s (2009) data is labeled with only 5 extremely coarse-grained categories. The remaining datasets are too small to provide wide coverage. See Table 2 below for size comparison with other publicly available, semantically annotated datasets. Size Work 17509 Tratz and Hovy, 2010 2169 Kim and Baldwin, 2005 2031 Girju, 2007 1660 Rosario and Hearst, 2001 1443 Ó Séaghdha and Copestake, 2007 505 Barker and Szpakowicz, 1998 600 Nastase and Szpakowicz, 2003 395 Vanderwende, 1994 385 Lauer, 1995 Table 2: Size of various available noun compound datasets labeled with relation annotations. Italics indicate that the dataset contains n-prep-n constructions and/or non-nouns. 5 Automated Classification We use a Maximum Entropy (Berger et al., 1996) classifier with a large number of boolean features, some of which are novel (e.g., the inclusion of words from WordNet definitions). Maximum Entropy classifiers have been effective on a variety of NLP problems including preposition sense disambiguation (Ye and Baldwin, 2007), which is somewhat similar to noun compound interpretation. We use the implementation provided in the MALLET machine learning toolkit (McCallum, 2002). 
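The classifier itself is MALLET's Maximum Entropy implementation; as a rough stand-in, the sketch below shows the same kind of setup, boolean features fed to a multinomial logistic-regression model, using scikit-learn. The feature strings, the two training examples, and the library choice are illustrative assumptions rather than the configuration actually used.

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each compound is a bag of boolean features such as "n2_lexname=noun.artifact";
# the feature names here are made up for illustration.
train_feats = [
    {"n1_lexname=noun.plant": True, "n2_lexname=noun.plant": True},
    {"n1_lexname=noun.substance": True, "n2_lexname=noun.artifact": True},
]
train_labels = ["WHOLE+PART/MEMBER_OF", "SUBSTANCE/MATERIAL/INGREDIENT+WHOLE"]

vec = DictVectorizer()                      # maps feature dicts to sparse vectors
X = vec.fit_transform(train_feats)
clf = LogisticRegression(max_iter=1000)     # MaxEnt-style multinomial model
clf.fit(X, train_labels)

test_x = vec.transform([{"n1_lexname=noun.substance": True,
                         "n2_lexname=noun.artifact": True}])
prediction = clf.predict(test_x)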
5.1 Features Used

WordNet-based Features
• {Synonyms, Hypernyms} for all NN and VB entries for each word
• Intersection of the words' hypernyms
• All terms from the 'gloss' for each word
• Intersection of the words' 'gloss' terms
• Lexicographer file names for each word's NN and VB entries (e.g., n1:substance)
• Logical AND of lexicographer file names for the two words (e.g., n1:substance ∧ n2:artifact)
• Lists of all link types (e.g., meronym links) associated with each word
• Logical AND of the link types (e.g., n1:hasMeronym(s) ∧ n2:hasHolonym(s))
• Part-of-speech (POS) indicators for the existence of VB, ADJ, and ADV entries for each of the nouns
• Logical AND of the POS indicators for the two words
• 'Lexicalized' indicator for the existence of an entry for the compound as a single term
• Indicators if either word is a part of the other word according to Part-Of links
• Indicators if either word is a hypernym of the other
• Indicators if either word is in the definition of the other

Roget's Thesaurus-based Features
• Roget's divisions for all noun (and verb) entries for each word
• Roget's divisions shared by the two words

Surface-level Features
• Indicators for the suffix types (e.g., de-adjectival, de-nominal [non]agentive, de-verbal [non]agentive)
• Indicators for degree, number, order, or locative prefixes (e.g., ultra-, poly-, post-, and inter-, respectively)
• Indicators for whether or not a preposition occurs within either term (e.g., 'down' in 'breakdown')
• The last {two, three} letters of each word

Web 1T N-gram Features
To provide information related to term usage to the classifier, we extracted trigram and 4-gram features from the Web 1T Corpus (Brants and Franz, 2006), a large collection of n-grams and their counts created from approximately one trillion words of Web text. Only n-grams containing lowercase words were used. 5-grams were not used due to memory limitations. Only n-grams containing both terms (including plural forms) were extracted. Table 3 describes the extracted n-gram features.

5.2 Cross Validation Experiments

We performed 10-fold cross validation on our dataset, and, for the purpose of comparison, we also performed 5-fold cross validation on Ó Séaghdha's (2007) dataset using his folds. Our classification accuracy results are 79.3% on our data and 63.6% on the Ó Séaghdha data. We used the χ2 measure to limit our experiments to the most useful 35000 features, which is the point where we obtain the highest results on Ó Séaghdha's data. The 63.6% figure is similar to the best previously reported accuracy for this dataset of 63.1%, which was obtained by Ó Séaghdha and Copestake (2009) using kernel methods. For comparison with SVMs, we used Thorsten Joachims' SVMmulticlass, which implements an optimization solution to Crammer and Singer's (2001) multiclass SVM formulation. The best results were similar, with 79.4% on our dataset and 63.1% on Ó Séaghdha's. SVMmulticlass was, however, observed to be very sensitive to the tuning of the C parameter, which determines the tradeoff between training error and margin width. The best results for the two datasets were produced with C set to 5000 and 375, respectively.
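As a rough outline of the protocol in Section 5.2 (χ2-based selection of the 35000 most useful features, followed by 10-fold cross validation), here is a sketch using scikit-learn rather than the MALLET and SVMmulticlass tools actually used; the feature matrix and labels below are randomly generated stand-ins, not real data.

```python
# Outline of the evaluation protocol: rank features by the chi-squared
# statistic, keep the top k, and score a maximum entropy classifier with
# 10-fold cross validation. Data here is synthetic; in the real setup X
# would be the boolean feature matrix and y the relation labels.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.randint(0, 2, size=(200, 5000))   # 200 compounds, 5000 boolean features
y = np.arange(200) % 10                   # 10 toy relation labels, 20 each

k = min(35000, X.shape[1])                # the paper keeps the top 35,000 features
pipeline = Pipeline([
    ("select", SelectKBest(chi2, k=k)),   # chi-squared feature selection
    ("maxent", LogisticRegression(max_iter=1000)),
])

scores = cross_val_score(pipeline, X, y, cv=10)   # 10-fold cross validation
print("mean accuracy: %.3f" % scores.mean())
```

Placing the feature selection inside the pipeline keeps it within each training fold, so the χ2 ranking never sees the held-out data.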
Trigram Feature Extraction Patterns
text <n1> <n2>
<*> <n1> <n2>
<n1> <n2> text
<n1> <n2> <*>
<n1> text <n2>
<n2> text <n1>
<n1> <*> <n2>
<n2> <*> <n1>

4-Gram Feature Extraction Patterns
<n1> <n2> text text
<n1> <n2> <*> text
text <n1> <n2> text
text text <n1> <n2>
text <*> <n1> <n2>
<n1> text text <n2>
<n1> text <*> <n2>
<n1> <*> text <n2>
<n1> <*> <*> <n2>
<n2> text text <n1>
<n2> text <*> <n1>
<n2> <*> text <n1>
<n2> <*> <*> <n1>

Table 3: Patterns for extracting trigram and 4-gram features from the Web 1T Corpus for a given noun compound (n1 n2).

To assess the impact of the various features, we ran the cross validation experiments for each feature type, alternating between including only one feature type and including all feature types except that one. The results for these runs using the Maximum Entropy classifier are presented in Table 4. There are several points of interest in these results. The WordNet gloss terms had a surprisingly strong influence. In fact, by themselves they proved roughly as useful as the hypernym features, and their removal had the single strongest negative impact on accuracy for our dataset. As far as we know, this is the first time that WordNet definition words have been used as features for noun compound interpretation. In the future, it may be valuable to add definition words from other machine-readable dictionaries. The influence of the Web 1T n-gram features was somewhat mixed. They had a positive impact on the Ó Séaghdha data, but their effect on our dataset was limited and mixed, with the removal of the 4-gram features actually improving performance slightly.

                     Our Data        Ó Séaghdha Data
                     1      M-1      1      M-1
WordNet-based
synonyms             0.674  0.793    0.469  0.626
hypernyms            0.753  0.787    0.539  0.626
hypernyms∩           0.250  0.791    0.357  0.624
gloss terms          0.741  0.785    0.510  0.613
gloss terms∩         0.226  0.793    0.275  0.632
lexfnames            0.583  0.792    0.505  0.629
lexfnames∧           0.480  0.790    0.440  0.629
linktypes            0.328  0.793    0.365  0.631
linktypes∧           0.277  0.792    0.346  0.626
pos                  0.146  0.793    0.239  0.633
pos∧                 0.146  0.793    0.235  0.632
part-of terms        0.372  0.793    0.368  0.635
lexicalized          0.132  0.793    0.213  0.637
part of other        0.132  0.793    0.216  0.636
gloss of other       0.133  0.793    0.214  0.635
hypernym of other    0.132  0.793    0.227  0.627
Roget's Thesaurus-based
div info             0.679  0.789    0.471  0.629
div info∩            0.173  0.793    0.283  0.633
Surface level
affixes              0.200  0.793    0.274  0.637
affixes∧             0.201  0.792    0.272  0.635
last letters         0.481  0.792    0.396  0.634
prepositions         0.136  0.793    0.222  0.635
Web 1T-based
trigrams             0.571  0.790    0.437  0.615
4-grams              0.558  0.797    0.442  0.604

Table 4: Impact of features; cross validation accuracy for the only-one-feature-type and all-but-one-feature-type experiments, denoted by 1 and M-1 respectively. ∩ – features shared by both n1 and n2; ∧ – n1 and n2 features conjoined by logical AND (e.g., n1 is a 'substance' ∧ n2 is an 'artifact').

6 Evaluation

6.1 Evaluation Data

To assess the quality of our taxonomy and classification method, we performed an inter-annotator agreement study using 150 noun compounds extracted from a random subset of articles taken from New York Times articles dating back to 1987 (Sandhaus, 2008). The terms were selected based upon their frequency (i.e., a compound occurring twice as often as another is twice as likely to be selected) to label for testing purposes. Using a heuristic similar to that used by Lauer (1995), we only extracted binary noun compounds not part of a larger sequence. Before reaching the 150 mark, we discarded 94 of the drawn examples because they were included in the training set.
Thus, our training set covers roughly 38.5% of the binary noun compound instances in recent New York Times articles. 6.2 Annotators Due to the relatively high speed and low cost of Amazon’s Mechanical Turk service, we chose to use Mechanical Turkers as our annotators. Using Mechanical Turk to obtain interannotator agreement figures has several drawbacks. The first and most significant drawback is that it is impossible to force each Turker to label every data point without putting all the terms onto a single web page, which is highly impractical for a large taxonomy. Some Turkers may label every compound, but most do not. Second, while we requested that Turkers only work on our task if English was their first language, we had no method of enforcing this. Third, Turker annotation quality varies considerably. 6.3 Combining Annotators To overcome the shortfalls of using Turkers for an inter-annotator agreement study, we chose to request ten annotations per noun compound and then combine the annotations into a single set of selections using a weighted voting scheme. To combine the results, we calculated a “quality” score for each Turker based upon how often he/she agreed with the others. This score was computed as the average percentage of other Turkers who agreed with his/her annotations. The score for each label for a particular compound was then computed as the sum of the Turker quality scores of the Turkers 684 who annotated the compound. Finally, the label with the highest rating was selected. 6.4 Inter-annotator Agreement Results The raw agreement scores along with Cohen’s κ (Cohen, 1960), a measure of inter-annotator agreement that discounts random chance, were calculated against the authors’ labeling of the data for each Turker, the weighted-voting annotation set, and the automatic classification output. These statistics are reported in Table 5 along with the individual Turker “quality” scores. The 54 Turkers who made fewer than 3 annotations were excluded from the calculations under the assumption that they were not dedicated to the task, leaving a total of 49 Turkers. Due to space limitations, only results for Turkers who annotated 15 or more instances are included in Table 5. We recomputed the κ statistics after conflating the category groups in two different ways. The first variation involved conflating all the TOPIC categories into a single topic category, resulting in a total of 37 categories (denoted by κ* in Table 5). For the second variation, in addition to conflating the TOPIC categories, we conflated the ATTRIBUTE categories into a single category and the PURPOSE/ACTIVITY categories into a single category, for a total of 27 categories (denoted by κ** in Table 5). 6.5 Results Discussion The .57-.67 κ figures achieved by the Voted annotations compare well with previously reported inter-annotator agreement figures for noun compounds using fine-grained taxonomies. Kim and Baldwin (2005) report an agreement of 52.31% (not κ) for their dataset using Barker and Szpakowicz’s (1998) 20 semantic relations. Girju et al. (2005) report .58 κ using a set of 35 semantic relations, only 21 of which were used, and a .80 κ score using Lauer’s 8 prepositional paraphrases. Girju (2007) reports .61 κ agreement using a similar set of 22 semantic relations for noun compound annotation in which the annotators are shown translations of the compound in foreign languages. 
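The quality-weighted voting of Section 6.3 is compact enough to sketch directly. In the sketch below (with made-up annotations; the function names are mine, not the authors'), a Turker's weight is the average proportion of co-annotators who agree with his or her labels, and each candidate label for a compound scores the sum of the weights of the Turkers who chose it.

```python
# Sketch of the quality-weighted voting scheme described in Section 6.3.
# annotations: {compound: {turker_id: label}}; the example data is made up.
from collections import defaultdict

annotations = {
    "soup pot": {"t1": "CONTAIN", "t2": "CONTAIN", "t3": "PURPOSE"},
    "bike tire": {"t1": "WHOLE+PART", "t2": "WHOLE+PART", "t3": "WHOLE+PART"},
}

def turker_weights(annotations):
    """Average percentage of co-annotators agreeing with each Turker."""
    agree_rates = defaultdict(list)
    for labels in annotations.values():
        for turker, label in labels.items():
            others = [l for t, l in labels.items() if t != turker]
            if others:
                agree_rates[turker].append(
                    sum(l == label for l in others) / len(others))
    return {t: sum(r) / len(r) for t, r in agree_rates.items()}

def voted_labels(annotations):
    """Pick, per compound, the label with the highest summed Turker weight."""
    weights = turker_weights(annotations)
    voted = {}
    for compound, labels in annotations.items():
        scores = defaultdict(float)
        for turker, label in labels.items():
            scores[label] += weights[turker]
        voted[compound] = max(scores, key=scores.get)
    return voted

print(voted_labels(annotations))
```

Weighting annotators by their agreement with the rest of the crowd is a common way to damp the influence of low-quality Turkers without discarding their labels outright.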
Ó Séaghdha (2007) reports a .68 κ for a relatively small set of relations (BE, HAVE, IN, INST, ACTOR, ABOUT) after removing compounds with non-specific associations or high lexicalization. The correlation between our automatic "quality" scores for the Turkers who performed at least three annotations and their simple agreement with our annotations was very strong at 0.88. The .51 automatic classification figure is respectable given the larger number of categories in the taxonomy. It is also important to remember that the training set covers a large portion of the two-word noun compound instances in recent New York Times articles, so substantially higher accuracy can be expected on many texts. Interestingly, conflating categories only improved the κ statistics for the Turkers, not the automatic classifier.

Id     N    Weight  Agree  κ     κ*    κ**
1      23   0.45    0.70   0.67  0.67  0.74
2      34   0.46    0.68   0.65  0.65  0.72
3      35   0.34    0.63   0.60  0.61  0.61
4      24   0.46    0.63   0.59  0.68  0.76
5      16   0.58    0.63   0.59  0.59  0.54
Voted  150  NA      0.59   0.57  0.61  0.67
6      52   0.45    0.58   0.54  0.60  0.60
7      38   0.35    0.55   0.52  0.54  0.56
8      149  0.36    0.52   0.49  0.53  0.58
Auto   150  NA      0.51   0.47  0.47  0.45
9      88   0.38    0.48   0.45  0.49  0.59
10     36   0.42    0.47   0.43  0.48  0.52
11     104  0.29    0.46   0.43  0.48  0.52
12     38   0.33    0.45   0.40  0.46  0.47
13     66   0.31    0.42   0.39  0.39  0.49
14     15   0.27    0.40   0.34  0.31  0.29
15     62   0.23    0.34   0.29  0.35  0.38
16     150  0.23    0.30   0.26  0.26  0.30
17     19   0.24    0.26   0.21  0.17  0.14
18     144  0.21    0.25   0.20  0.22  0.22
19     29   0.18    0.21   0.14  0.17  0.31
20     22   0.18    0.18   0.12  0.10  0.16
21     51   0.19    0.18   0.13  0.20  0.26
22     41   0.02    0.02   0.00  0.00  0.01

Table 5: Annotation results. Id – annotator id; N – number of annotations; Weight – voting weight; Agree – raw agreement versus the author's annotations; κ – Cohen's κ agreement; κ* and κ** – Cohen's κ results after conflating certain categories. Voted – combined annotation set using weighted voting; Auto – automatic classification output.

7 Conclusion

In this paper, we present a novel, fine-grained taxonomy of 43 noun-noun semantic relations, the largest annotated noun compound dataset yet created, and a supervised classification method for automatic noun compound interpretation. We describe our taxonomy and provide mappings to taxonomies used by others. Our inter-annotator agreement study, which utilized non-experts, shows good inter-annotator agreement given the difficulty of the task, indicating that our category definitions are relatively straightforward. Our taxonomy provides wide coverage, with only 2.32% of our dataset marked as other/lexicalized and 2.67% of our 150 inter-annotator agreement data marked as such by the combined Turker (Voted) annotation set. We demonstrated the effectiveness of a straightforward, supervised classification approach to noun compound interpretation that uses a large variety of boolean features. We also examined the importance of the different features, noting a novel and very useful set of features: the words comprising the definitions of the individual words.

8 Future Work

In the future, we plan to focus on the interpretation of noun compounds with 3 or more nouns, a problem that includes bracketing noun compounds into their dependency structures in addition to noun-noun semantic relation interpretation. Furthermore, we would like to build a system that can handle longer noun phrases, including prepositions and possessives. We would like to experiment with including features from various other lexical resources to determine their usefulness for this problem.
Eventually, we would like to expand our data set and relations to cover proper nouns as well. We are hopeful that our current dataset and relation definitions, which will be made available via http://www.isi.edu will be helpful to other researchers doing work regarding text semantics. Acknowledgements Stephen Tratz is supported by a National Defense Science and Engineering Graduate Fellowship. References Ahn, K., J. Bos, J. R. Curran, D. Kor, M. Nissim, and B. Webber. 2005. Question Answering with QED at TREC-2005. In Proc. of TREC-2005. Baldwin, T. & T. Tanaka 2004. Translation by machine of compound nominals: Getting it right. In Proc. of the ACL 2004 Workshop on Multiword Expressions: Integrating Processing. Barker, K. and S. Szpakowicz. 1998. Semi-Automatic Recognition of Noun Modifier Relationships. In Proc. of the 17th International Conference on Computational Linguistics. Berger, A., S. A. Della Pietra, and V. J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics 22:39-71. Brants, T. and A. Franz. 2006. Web 1T 5-gram Corpus Version 1.1. Linguistic Data Consortium. Butnariu, C. and T. Veale. 2008. A concept-centered approach to noun-compound interpretation. In Proc. of 22nd International Conference on Computational Linguistics (COLING 2008). Butnariu, C., S.N. Kim, P. Nakov, D. Ó Séaghdha, S. Szpakowicz, and T. Veale. 2009. SemEval Task 9: The Interpretation of Noun Compounds Using Paraphrasing Verbs and Prepositions. In Proc. of the NAACL HLT Workshop on Semantic Evaluations: Recent Achievements and Future Directions. Cohen, J. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement. 20:1. Crammer, K. and Y. Singer. On the Algorithmic Implementation of Multi-class SVMs In Journal of Machine Learning Research. Downing, P. 1977. On the Creation and Use of English Compound Nouns. Language. 53:4. Fellbaum, C., editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA. Finin, T. 1980. The Semantic Interpretation of Compound Nominals. Ph.D dissertation University of Illinois, Urbana, Illinois. Girju, R., D. Moldovan, M. Tatu and D. Antohe. 2005. On the semantics of noun compounds. Computer Speech and Language, 19. Girju, R. 2007. Improving the interpretation of noun phrases with cross-linguistic information. In Proc. of the 45th Annual Meeting of the Association of Computational Linguistics (ACL 2007). Girju, R. 2009. The Syntax and Semantics of Prepositions in the Task of Automatic Interpretation of Nominal Phrases and Compounds: a Crosslinguistic Study. In Computational Linguistics 35(2) - Special Issue on Prepositions in Application. Jespersen, O. 1949. A Modern English Grammar on Historical Principles. Ejnar Munksgaard. Copenhagen. Kim, S.N. and T. Baldwin. 2007. Interpreting Noun Compounds using Bootstrapping and Sense Collocation. In Proc. of the 10th Conf. of the Pacific Association for Computational Linguistics. Kim, S.N. and T. Baldwin. 2005. Automatic Interpretation of Compound Nouns using WordNet::Similarity. In Proc. of 2nd International Joint Conf. on Natural Language Processing. 686 Lauer, M. 1995. Corpus statistics meet the compound noun. In Proc. of the 33rd Meeting of the Association for Computational Linguistics. Lees, R.B. 1960. The Grammar of English Nominalizations. Indiana University. Bloomington, IN. Levi, J.N. 1978. The Syntax and Semantics of Complex Nominals. Academic Press. New York. McCallum, A. K. MALLET: A Machine Learning for Language Toolkit. 
http://mallet.cs.umass.edu. 2002. Moldovan, D., A. Badulescu, M. Tatu, D. Antohe, and R. Girju. 2004. Models for the semantic classification of noun phrases. In Proc. of Computational Lexical Semantics Workshop at HLT-NAACL 2004. Nakov, P. and M. Hearst. 2005. Search Engine Statistics Beyond the n-gram: Application to Noun Compound Bracketing. In Proc. the Ninth Conference on Computational Natural Language Learning. Nakov, P. 2008. Noun Compound Interpretation Using Paraphrasing Verbs: Feasibility Study. In Proc. the 13th International Conference on Artificial Intelligence: Methodology, Systems, Applications (AIMSA’08). Nastase V. and S. Szpakowicz. 2003. Exploring nounmodifier semantic relations. In Proc. the 5th International Workshop on Computational Semantics. Nastase, V., J. S. Shirabad, M. Sokolova, and S. Szpakowicz 2006. Learning noun-modifier semantic relations with corpus-based and Wordnet-based features. In Proc. of the 21st National Conference on Artificial Intelligence (AAAI-06). Ó Séaghdha, D. and A. Copestake. 2009. Using lexical and relational similarity to classify semantic relations. In Proc. of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2009). Ó Séaghdha, D. 2007. Annotating and Learning Compound Noun Semantics. In Proc. of the ACL 2007 Student Research Workshop. Rosario, B. and M. Hearst. 2001. Classifying the Semantic Relations in Noun Compounds via DomainSpecific Lexical Hierarchy. In Proc. of 2001 Conference on Empirical Methods in Natural Language Processing (EMNLP-01). Sandhaus, E. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia. Spärck Jones, K. 1983. Compound Noun Interpretation Problems. Computer Speech Processing, eds. F. Fallside and W A. Woods, Prentice-Hall, NJ. Turney, P. D. 2006. Similarity of semantic relations. Computation Linguistics, 32(3):379-416 Vanderwende, L. 1994. Algorithm for Automatic Interpretation of Noun Sequences. In Proc. of COLING-94. Warren, B. 1978. Semantic Patterns of Noun-Noun Compounds. Acta Universitatis Gothobugensis. Ye, P. and T. Baldwin. 2007. MELB-YB: Preposition Sense Disambiguation Using Rich Semantic Features. In Proc. of the 4th International Workshop on Semantic Evaluations (SemEval-2007). 687
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 688–697, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Models of Metaphor in NLP Ekaterina Shutova Computer Laboratory University of Cambridge 15 JJ Thomson Avenue Cambridge CB3 0FD, UK [email protected] Abstract Automatic processing of metaphor can be clearly divided into two subtasks: metaphor recognition (distinguishing between literal and metaphorical language in a text) and metaphor interpretation (identifying the intended literal meaning of a metaphorical expression). Both of them have been repeatedly addressed in NLP. This paper is the first comprehensive and systematic review of the existing computational models of metaphor, the issues of metaphor annotation in corpora and the available resources. 1 Introduction Our production and comprehension of language is a multi-layered computational process. Humans carry out high-level semantic tasks effortlessly by subconsciously employing a vast inventory of complex linguistic devices, while simultaneously integrating their background knowledge, to reason about reality. An ideal model of language understanding would also be capable of performing such high-level semantic tasks. However, a great deal of NLP research to date focuses on processing lower-level linguistic information, such as e.g. part-of-speech tagging, discovering syntactic structure of a sentence (parsing), coreference resolution, named entity recognition and many others. Another cohort of researchers set the goal of improving applicationbased statistical inference (e.g. for recognizing textual entailment or automatic summarization). In contrast, there have been fewer attempts to bring the state-of-the-art NLP technologies together to model the way humans use language to frame high-level reasoning processes, such as for example, creative thought. The majority of computational approaches to figurative language still exploit the ideas articulated three decades ago (Wilks, 1978; Lakoff and Johnson, 1980; Fass, 1991) and often rely on taskspecific hand-coded knowledge. However, recent work on lexical semantics and lexical acquisition techniques opens many new avenues for creation of fully automated models for recognition and interpretation of figurative language. In this paper I will focus on the phenomenon of metaphor and describe the most prominent computational approaches to metaphor, as well the issues of resource creation and metaphor annotation. Metaphors arise when one concept is viewed in terms of the properties of the other. In other words it is based on similarity between the concepts. Similarity is a kind of association implying the presence of characteristics in common. Here are some examples of metaphor. (1) Hillary brushed aside the accusations. (2) How can I kill a process? (Martin, 1988) (3) I invested myself fully in this relationship. (4) And then my heart with pleasure fills, And dances with the daffodils.1 In metaphorical expressions seemingly unrelated features of one concept are associated with another concept. In the example (2) the computational process is viewed as something alive and, therefore, its forced termination is associated with the act of killing. Metaphorical expressions represent a great variety, ranging from conventional metaphors, which we reproduce and comprehend every day, e.g. those in (2) and (3), to poetic and largely novel ones, such as (4). 
The use of metaphor is ubiquitous in natural language text and it is a serious bottleneck in automatic text understanding. 1“I wandered lonely as a cloud”, William Wordsworth, 1804. 688 In order to estimate the frequency of the phenomenon, Shutova (2010) conducted a corpus study on a subset of the British National Corpus (BNC) (Burnard, 2007) representing various genres. They manually annotated metaphorical expressions in this data and found that 241 out of 761 sentences contained a metaphor. Due to such a high frequency of their use, a system capable of recognizing and interpreting metaphorical expressions in unrestricted text would become an invaluable component of any semantics-oriented NLP application. Automatic processing of metaphor can be clearly divided into two subtasks: metaphor recognition (distinguishing between literal and metaphorical language in text) and metaphor interpretation (identifying the intended literal meaning of a metaphorical expression). Both of them have been repeatedly addressed in NLP. 2 Theoretical Background Four different views on metaphor have been broadly discussed in linguistics and philosophy: the comparison view (Gentner, 1983), the interaction view (Black, 1962), (Hesse, 1966), the selectional restrictions violation view (Wilks, 1975; Wilks, 1978) and the conceptual metaphor view (Lakoff and Johnson, 1980)2. All of these approaches share the idea of an interconceptual mapping that underlies the production of metaphorical expressions. In other words, metaphor always involves two concepts or conceptual domains: the target (also called topic or tenor in the linguistics literature) and the source (or vehicle). Consider the examples in (5) and (6). (5) He shot down all of my arguments. (Lakoff and Johnson, 1980) (6) He attacked every weak point in my argument. (Lakoff and Johnson, 1980) According to Lakoff and Johnson (1980), a mapping of a concept of argument to that of war is employed here. The argument, which is the target concept, is viewed in terms of a battle (or a war), the source concept. The existence of such a link allows us to talk about arguments using the war terminology, thus giving rise to a number of metaphors. 2A detailed overview and criticism of these four views can be found in (Tourangeau and Sternberg, 1982). However, Lakoff and Johnson do not discuss how metaphors can be recognized in the linguistic data, which is the primary task in the automatic processing of metaphor. Although humans are highly capable of producing and comprehending metaphorical expressions, the task of distinguishing between literal and non-literal meanings and, therefore, identifying metaphor in text appears to be challenging. This is due to the variation in its use and external form, as well as a not clear-cut semantic distinction. Gibbs (1984) suggests that literal and figurative meanings are situated at the ends of a single continuum, along which metaphoricity and idiomaticity are spread. This makes demarcation of metaphorical and literal language fuzzy. So far, the most influential account of metaphor recognition is that of Wilks (1978). According to Wilks, metaphors represent a violation of selectional restrictions in a given context. Selectional restrictions are the semantic constraints that a verb places onto its arguments. Consider the following example. (7) My car drinks gasoline. (Wilks, 1978) The verb drink normally takes an animate subject and a liquid object. 
Therefore, drink taking a car as a subject is an anomaly, which may in turn indicate the metaphorical use of drink. 3 Automatic Metaphor Recognition One of the first attempts to identify and interpret metaphorical expressions in text automatically is the approach of Fass (1991). It originates in the work of Wilks (1978) and utilizes handcoded knowledge. Fass (1991) developed a system called met*, capable of discriminating between literalness, metonymy, metaphor and anomaly. It does this in three stages. First, literalness is distinguished from non-literalness using selectional preference violation as an indicator. In the case that non-literalness is detected, the respective phrase is tested for being a metonymic relation using hand-coded patterns (such as CONTAINERfor-CONTENT). If the system fails to recognize metonymy, it proceeds to search the knowledge base for a relevant analogy in order to discriminate metaphorical relations from anomalous ones. E.g., the sentence in (7) would be represented in this framework as (car,drink,gasoline), which does not satisfy the preference (animal,drink,liquid), as car 689 is not a hyponym of animal. met* then searches its knowledge base for a triple containing a hypernym of both the actual argument and the desired argument and finds (thing,use,energy source), which represents the metaphorical interpretation. However, Fass himself indicated a problem with the selectional preference violation approach applied to metaphor recognition. The approach detects any kind of non-literalness or anomaly in language (metaphors, metonymies and others), and not only metaphors, i.e., it overgenerates. The methods met* uses to differentiate between those are mainly based on hand-coded knowledge, which implies a number of limitations. Another problem with this approach arises from the high conventionality of metaphor in language. This means that some metaphorical senses are very common. As a result the system would extract selectional preference distributions skewed towards such conventional metaphorical senses of the verb or one of its arguments. Therefore, although some expressions may be fully metaphorical in nature, no selectional preference violation can be detected in their use. Another counterargument is bound to the fact that interpretation is always context dependent, e.g. the phrase all men are animals can be used metaphorically, however, without any violation of selectional restrictions. Goatly (1997) addresses the phenomenon of metaphor by identifying a set of linguistic cues indicating it. He gives examples of lexical patterns indicating the presence of a metaphorical expression, such as metaphorically speaking, utterly, completely, so to speak and, surprisingly, literally. Such cues would probably not be enough for metaphor extraction on their own, but could contribute to a more complex system. The work of Peters and Peters (2000) concentrates on detecting figurative language in lexical resources. They mine WordNet (Fellbaum, 1998) for the examples of systematic polysemy, which allows to capture metonymic and metaphorical relations. The authors search for nodes that are relatively high up in the WordNet hierarchy and that share a set of common word forms among their descendants. Peters and Peters found that such nodes often happen to be in metonymic (e.g. publication – publisher) or metaphorical (e.g. supporting structure – theory) relation. 
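To make the selectional-restriction test at the heart of met* more concrete, here is a small illustrative sketch, not Fass's implementation: the preference of drink for an animate subject is hand-coded as a WordNet synset, and a candidate subject violates the preference if none of its senses falls under that synset. It assumes NLTK's WordNet interface and requires the WordNet data to be downloaded.

```python
# Illustrative preference check in the spirit of Wilks and Fass: a verb's
# preferred argument class is a WordNet synset, and an argument violates the
# preference if no sense of it is a descendant of that synset.
# Requires: pip install nltk, then nltk.download('wordnet').
from nltk.corpus import wordnet as wn

PREFERRED_SUBJECT = {"drink": wn.synset("animal.n.01")}  # hand-coded preference

def satisfies_preference(noun, preferred):
    """True if some noun sense of `noun` is `preferred` or one of its descendants."""
    for sense in wn.synsets(noun, pos=wn.NOUN):
        if sense == preferred or preferred in sense.closure(lambda s: s.hypernyms()):
            return True
    return False

for subject in ["man", "car"]:
    if satisfies_preference(subject, PREFERRED_SUBJECT["drink"]):
        print(subject, "drinks ... : preference satisfied (literal reading possible)")
    else:
        print(subject, "drinks ... : preference violated (possibly figurative)")
```

As the discussion above notes, a check of this kind flags any anomaly, not only metaphor, and it can miss conventionalized metaphorical senses that no longer violate the verb's observed preferences.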
The CorMet system discussed in (Mason, 2004) is the first attempt to discover source-target domain mappings automatically. This is done by “finding systematic variations in domain-specific selectional preferences, which are inferred from large, dynamically mined Internet corpora”. For example, Mason collects texts from the LAB domain and the FINANCE domain, in both of which pour would be a characteristic verb. In the LAB domain pour has a strong selectional preference for objects of type liquid, whereas in the FINANCE domain it selects for money. From this Mason’s system infers the domain mapping FINANCE – LAB and the concept mapping money – liquid. He compares the output of his system against the Master Metaphor List (Lakoff et al., 1991) containing hand-crafted metaphorical mappings between concepts. Mason reports an accuracy of 77%, although it should be noted that as any evaluation that is done by hand it contains an element of subjectivity. Birke and Sarkar (2006) present a sentence clustering approach for non-literal language recognition implemented in the TroFi system (Trope Finder). This idea originates from a similaritybased word sense disambiguation method developed by Karov and Edelman (1998). The method employs a set of seed sentences, where the senses are annotated; computes similarity between the sentence containing the word to be disambiguated and all of the seed sentences and selects the sense corresponding to the annotation in the most similar seed sentences. Birke and Sarkar (2006) adapt this algorithm to perform a two-way classification: literal vs. non-literal, and they do not clearly define the kinds of tropes they aim to discover. They attain a performance of 53.8% in terms of f-score. The method of Gedigan et al. (2006) discriminates between literal and metaphorical use. They trained a maximum entropy classifier for this purpose. They obtained their data by extracting the lexical items whose frames are related to MOTION and CURE from FrameNet (Fillmore et al., 2003). Then they searched the PropBank Wall Street Journal corpus (Kingsbury and Palmer, 2002) for sentences containing such lexical items and annotated them with respect to metaphoricity. They used PropBank annotation (arguments and their semantic types) as features to train the classifier and report an accuracy of 95.12%. This result is, however, only a little higher than the performance of the naive baseline assigning majority class to all instances (92.90%). These numbers 690 can be explained by the fact that 92.00% of the verbs of MOTION and CURE in the Wall Street Journal corpus are used metaphorically, thus making the dataset unbalanced with respect to the target categories and the task notably easier. Both Birke and Sarkar (2006) and Gedigan et al. (2006) focus only on metaphors expressed by a verb. As opposed to that the approach of Krishnakumaran and Zhu (2007) deals with verbs, nouns and adjectives as parts of speech. They use hyponymy relation in WordNet and word bigram counts to predict metaphors at a sentence level. Given an IS-A metaphor (e.g. The world is a stage3) they verify if the two nouns involved are in hyponymy relation in WordNet, and if they are not then this sentence is tagged as containing a metaphor. Along with this they consider expressions containing a verb or an adjective used metaphorically (e.g. He planted good ideas in their minds or He has a fertile imagination). 
Hereby they calculate bigram probabilities of verb-noun and adjective-noun pairs (including the hyponyms/hypernyms of the noun in question). If the combination is not observed in the data with sufficient frequency, the system tags the sentence containing it as metaphorical. This idea is a modification of the selectional preference view of Wilks. However, by using bigram counts over verb-noun pairs Krishnakumaran and Zhu (2007) loose a great deal of information compared to a system extracting verb-object relations from parsed text. The authors evaluated their system on a set of example sentences compiled from the Master Metaphor List (Lakoff et al., 1991), whereby highly conventionalized metaphors (they call them dead metaphors) are taken to be negative examples. Thus they do not deal with literal examples as such: essentially, the distinction they are making is between the senses included in WordNet, even if they are conventional metaphors, and those not included in WordNet. 4 Automatic Metaphor Interpretation Almost simultaneously with the work of Fass (1991), Martin (1990) presents a Metaphor Interpretation, Denotation and Acquisition System (MIDAS). In this work Martin captures hierarchical organisation of conventional metaphors. The idea behind this is that the more specific conventional metaphors descend from the general ones. 3William Shakespeare Given an example of a metaphorical expression, MIDAS searches its database for a corresponding metaphor that would explain the anomaly. If it does not find any, it abstracts from the example to more general concepts and repeats the search. If it finds a suitable general metaphor, it creates a mapping for its descendant, a more specific metaphor, based on this example. This is also how novel metaphors are acquired. MIDAS has been integrated with the Unix Consultant (UC), the system that answers users questions about Unix. The UC first tries to find a literal answer to the question. If it is not able to, it calls MIDAS which detects metaphorical expressions via selectional preference violation and searches its database for a metaphor explaining the anomaly in the question. Another cohort of approaches relies on performing inferences about entities and events in the source and target domains for metaphor interpretation. These include the KARMA system (Narayanan, 1997; Narayanan, 1999; Feldman and Narayanan, 2004) and the ATT-Meta project (Barnden and Lee, 2002; Agerri et al., 2007). Within both systems the authors developed a metaphor-based reasoning framework in accordance with the theory of conceptual metaphor. The reasoning process relies on manually coded knowledge about the world and operates mainly in the source domain. The results are then projected onto the target domain using the conceptual mapping representation. The ATT-Meta project concerns metaphorical and metonymic description of mental states and reasoning about mental states using first order logic. Their system, however, does not take natural language sentences as input, but logical expressions that are representations of small discourse fragments. KARMA in turn deals with a broad range of abstract actions and events and takes parsed text as input. Veale and Hao (2008) derive a “fluid knowledge representation for metaphor interpretation and generation”, called Talking Points. Talking Points are a set of characteristics of concepts belonging to source and target domains and related facts about the world which the authors acquire automatically from WordNet and from the web. 
Talking Points are then organized in Slipnet, a framework that allows for a number of insertions, deletions and substitutions in definitions of such characteristics in order to establish a connection between the target and the source 691 concepts. This work builds on the idea of slippage in knowledge representation for understanding analogies in abstract domains (Hofstadter and Mitchell, 1994; Hofstadter, 1995). Below is an example demonstrating how slippage operates to explain the metaphor Make-up is a Western burqa. Make-up => ≡typically worn by women ≈expected to be worn by women ≈must be worn by women ≈must be worn by Muslim women Burqa <= By doing insertions and substitutions the system arrives from the definition typically worn by women to that of must be worn by Muslim women, and thus establishes a link between the concepts of make-up and burqa. Veale and Hao (2008), however, did not evaluate to which extent their knowledge base of Talking Points and the associated reasoning framework are useful to interpret metaphorical expressions occurring in text. Shutova (2010) defines metaphor interpretation as a paraphrasing task and presents a method for deriving literal paraphrases for metaphorical expressions from the BNC. For example, for the metaphors in “All of this stirred an unfathomable excitement in her” or “a carelessly leaked report” their system produces interpretations “All of this provoked an unfathomable excitement in her” and “a carelessly disclosed report” respectively. They first apply a probabilistic model to rank all possible paraphrases for the metaphorical expression given the context; and then use automatically induced selectional preferences to discriminate between figurative and literal paraphrases. The selectional preference distribution is defined in terms of selectional association measure introduced by Resnik (1993) over the noun classes automatically produced by Sun and Korhonen (2009). Shutova (2010) tested their system only on metaphors expressed by a verb and report a paraphrasing accuracy of 0.81. 5 Metaphor Resources Metaphor is a knowledge-hungry phenomenon. Hence there is a need for either an extensive manually-created knowledge-base or a robust knowledge acquisition system for interpretation of metaphorical expressions. The latter being a hard task, a great deal of metaphor research resorted to the first option. Although hand-coded knowledge proved useful for metaphor interpretation (Fass, 1991; Martin, 1990), it should be noted that the systems utilizing it have a very limited coverage. One of the first attempts to create a multipurpose knowledge base of source–target domain mappings is the Master Metaphor List (Lakoff et al., 1991). It includes a classification of metaphorical mappings (mainly those related to mind, feelings and emotions) with the corresponding examples of language use. This resource has been criticized for the lack of clear structuring principles of the mapping ontology (L¨onneker-Rodman, 2008). The taxonomical levels are often confused, and the same classes are referred to by different class labels. This fact and the chosen data representation in the Master Metaphor List make it not suitable for computational use. However, both the idea of the list and its actual mappings ontology inspired the creation of other metaphor resources. The most prominent of them are MetaBank (Martin, 1994) and the Mental Metaphor Databank4 created in the framework of the ATT-meta project (Barnden and Lee, 2002; Agerri et al., 2007). 
The MetaBank is a knowledge-base of English metaphorical conventions, represented in the form of metaphor maps (Martin, 1988) containing detailed information about source-target concept mappings backed by empirical evidence. The ATT-meta project databank contains a large number of examples of metaphors of mind classified by source–target domain mappings taken from the Master Metaphor List. Along with this it is worth mentioning metaphor resources in languages other than English. There has been a wealth of research on metaphor in Spanish, Chinese, Russian, German, French and Italian. The Hamburg Metaphor Database (L¨onneker, 2004; Reining and L¨onneker-Rodman, 2007) contains examples of metaphorical expressions in German and French, which are mapped to senses from EuroWordNet5 and annotated with source–target domain mappings taken from the Master Metaphor List. Alonge and Castelli (2003) discuss how metaphors can be represented in ItalWordNet for 4http://www.cs.bham.ac.uk/∼jab/ATT-Meta/Databank/ 5EuroWordNet is a multilingual database with wordnets for several European languages (Dutch, Italian, Spanish, German, French, Czech and Estonian). The wordnets are structured in the same way as the Princeton WordNet for English. URL: http://www.illc.uva.nl/EuroWordNet/ 692 Italian and motivate this by linguistic evidence. Encoding metaphorical information in generaldomain lexical resources for English, e.g. WordNet (L¨onneker and Eilts, 2004), would undoubtedly provide a new platform for experiments and enable researchers to directly compare their results. 6 Metaphor Annotation in Corpora To reflect two distinct aspects of the phenomenon, metaphor annotation can be split into two stages: identifying metaphorical senses in text (akin word sense disambiguation) and annotating source – target domain mappings underlying the production of metaphorical expressions. Traditional approaches to metaphor annotation include manual search for lexical items used metaphorically (Pragglejaz Group, 2007), for source and target domain vocabulary (Deignan, 2006; Koivisto-Alanko and Tissari, 2006; Martin, 2006) or for linguistic markers of metaphor (Goatly, 1997). Although there is a consensus in the research community that the phenomenon of metaphor is not restricted to similarity-based extensions of meanings of isolated words, but rather involves reconceptualization of a whole area of experience in terms of another, there still has been surprisingly little interest in annotation of cross-domain mappings. However, a corpus annotated for conceptual mappings could provide a new starting point for both linguistic and cognitive experiments. 6.1 Metaphor and Polysemy The theorists of metaphor distinguish between two kinds of metaphorical language: novel (or poetic) metaphors, that surprise our imagination, and conventionalized metaphors, that become a part of an ordinary discourse. “Metaphors begin their lives as novel poetic creations with marked rhetorical effects, whose comprehension requires a special imaginative leap. As time goes by, they become a part of general usage, their comprehension becomes more automatic, and their rhetorical effect is dulled” (Nunberg, 1987). Following Orwell (1946) Nunberg calls such metaphors “dead” and claims that they are not psychologically distinct from literally-used terms. 
This scheme demonstrates how metaphorical associations capture some generalisations governing polysemy: over time some of the aspects of the target domain are added to the meaning of a term in a source domain, resulting in a (metaphorical) sense extension of this term. Copestake and Briscoe (1995) discuss sense extension mainly based on metonymic examples and model the phenomenon using lexical rules encoding metonymic patterns. Along with this they suggest that similar mechanisms can be used to account for metaphoric processes, and the conceptual mappings encoded in the sense extension rules would define the limits to the possible shifts in meaning. However, it is often unclear if a metaphorical instance is a case of broadening of the sense in context due to general vagueness in language, or it manifests a formation of a new distinct metaphorical sense. Consider the following examples. (8) a. As soon as I entered the room I noticed the difference. b. How can I enter Emacs? (9) a. My tea is cold. b. He is such a cold person. Enter in (8a) is defined as “to go or come into a place, building, room, etc.; to pass within the boundaries of a country, region, portion of space, medium, etc.”6 In (8b) this sense stretches to describe dealing with software, whereby COMPUTER PROGRAMS are viewed as PHYSICAL SPACES. However, this extended sense of enter does not appear to be sufficiently distinct or conventional to be included into the dictionary, although this could happen over time. The sentence (9a) exemplifies the basic sense of cold – “of a temperature sensibly lower than that of the living human body”, whereas cold in (9b) should be interpreted metaphorically as “void of ardour, warmth, or intensity of feeling; lacking enthusiasm, heartiness, or zeal; indifferent, apathetic”. These two senses are clearly linked via the metaphoric mapping between EMOTIONAL STATES and TEMPERATURES. A number of metaphorical senses are included in WordNet, however without any accompanying semantic annotation. 6.2 Metaphor Identification 6.2.1 Pragglejaz Procedure Pragglejaz Group (2007) proposes a metaphor identification procedure (MIP) within the frame6Sense definitions are taken from the Oxford English Dictionary. 693 work of the Metaphor in Discourse project (Steen, 2007). The procedure involves metaphor annotation at the word level as opposed to identifying metaphorical relations (between words) or source– target domain mappings (between concepts or domains). In order to discriminate between the verbs used metaphorically and literally the annotators are asked to follow the guidelines: 1. For each verb establish its meaning in context and try to imagine a more basic meaning of this verb on other contexts. Basic meanings normally are: (1) more concrete; (2) related to bodily action; (3) more precise (as opposed to vague); (4) historically older. 2. If you can establish the basic meaning that is distinct from the meaning of the verb in this context, the verb is likely to be used metaphorically. Such annotation can be viewed as a form of word sense disambiguation with an emphasis on metaphoricity. 6.2.2 Source – Target Domain Vocabulary Another popular method that has been used to extract metaphors is searching for sentences containing lexical items from the source domain, the target domain, or both (Stefanowitsch, 2006). This method requires exhaustive lists of source and target domain vocabulary. 
Martin (2006) conducted a corpus study in order to confirm that metaphorical expressions occur in text in contexts containing such lexical items. He performed his analysis on the data from the Wall Street Journal (WSJ) corpus and focused on four conceptual metaphors that occur with considerable regularity in the corpus. These include NUMERICAL VALUE AS LOCATION, COMMERCIAL ACTIVITY AS CONTAINER, COMMERCIAL ACTIVITY AS PATH FOLLOWING and COMMERCIAL ACTIVITY AS WAR. Martin manually compiled the lists of terms characteristic for each domain by examining sampled metaphors of these types and then augmented them through the use of thesaurus. He then searched the WSJ for sentences containing vocabulary from these lists and checked whether they contain metaphors of the above types. The goal of this study was to evaluate predictive ability of contexts containing vocabulary from (1) source domain and (2) target domain, as well as (3) estimating the likelihood of a metaphorical expression following another metaphorical expression described by the same mapping. He obtained the most positive results for metaphors of the type NUMERICAL-VALUEAS-LOCATION (P(Metaphor|Source) = 0.069, P(Metaphor|Target) = 0.677, P(Metaphor|Metaphor) = 0.703). 6.3 Annotating Source and Target Domains Wallington et al. (2003) carried out a metaphor annotation experiment in the framework of the ATTMeta project. They employed two teams of annotators. Team A was asked to annotate “interesting stretches”, whereby a phrase was considered interesting if (1) its significance in the document was non-physical, (2) it could have a physical significance in another context with a similar syntactic frame, (3) this physical significance was related to the abstract one. Team B had to annotate phrases according to their own intuitive definition of metaphor. Besides metaphorical expressions Wallington et al. (2003) attempted to annotate the involved source – target domain mappings. The annotators were given a set of mappings from the Master Metaphor List and were asked to assign the most suitable ones to the examples. However, the authors do not report the level of interannotator agreement nor the coverage of the mappings in the Master Metaphor List on their data. Shutova and Teufel (2010) adopt a different approach to the annotation of source – target domain mappings. They do not rely on predefined mappings, but instead derive independent sets of most common source and target categories. They propose a two stage procedure, whereby the metaphorical expressions are first identified using MIP, and then the source domain (where the basic sense comes from) and the target domain (the given context) are selected from the lists of categories. Shutova and Teufel (2010) report interannotator agreement of 0.61 (κ). 7 Conclusion and Future Directions The eighties and nineties provided us with a wealth of ideas on the structure and mechanisms of the phenomenon of metaphor. The approaches formulated back then are still highly influential, although their use of hand-coded knowledge is becoming increasingly less convincing. The last decade witnessed a high technological leap in 694 natural language computation, whereby manually crafted rules gradually give way to more robust corpus-based statistical methods. This is also the case for metaphor research. The latest developments in the lexical acquisition technology will in the near future enable fully automated corpusbased processing of metaphor. 
However, there is still a clear need in a unified metaphor annotation procedure and creation of a large publicly available metaphor corpus. Given such a resource the computational work on metaphor is likely to proceed along the following lines: (1) automatic acquisition of an extensive set of valid metaphorical associations from linguistic data via statistical pattern matching; (2) using the knowledge of these associations for metaphor recognition in the unseen unrestricted text and, finally, (3) interpretation of the identified metaphorical expressions by deriving the closest literal paraphrase (a representation that can be directly embedded in other NLP applications to enhance their performance). Besides making our thoughts more vivid and filling our communication with richer imagery, metaphors also play an important structural role in our cognition. Thus, one of the long term goals of metaphor research in NLP and AI would be to build a computational intelligence model accounting for the way metaphors organize our conceptual system, in terms of which we think and act. Acknowledgments I would like to thank Anna Korhonen and my reviewers for their most helpful feedback on this paper. The support of Cambridge Overseas Trust, who fully funds my studies, is gratefully acknowledged. References R. Agerri, J.A. Barnden, M.G. Lee, and A.M. Wallington. 2007. Metaphor, inference and domainindependent mappings. In Proceedings of RANLP2007, pages 17–23, Borovets, Bulgaria. A. Alonge and M. Castelli. 2003. Encoding information on metaphoric expressions in WordNet-like resources. In Proceedings of the ACL 2003 Workshop on Lexicon and Figurative Language, pages 10–17. J.A. Barnden and M.G. Lee. 2002. An artificial intelligence approach to metaphor understanding. Theoria et Historia Scientiarum, 6(1):399–412. J. Birke and A. Sarkar. 2006. A clustering approach for the nearly unsupervised recognition of nonliteral language. In In Proceedings of EACL-06, pages 329–336. M. Black. 1962. Models and Metaphors. Cornell University Press. L. Burnard. 2007. Reference Guide for the British National Corpus (XML Edition). A. Copestake and T. Briscoe. 1995. Semi-productive polysemy and sense extension. Journal of Semantics, 12:15–67. A. Deignan. 2006. The grammar of linguistic metaphors. In A. Stefanowitsch and S. T. Gries, editors, Corpus-Based Approaches to Metaphor and Metonymy, Berlin. Mouton de Gruyter. D. Fass. 1991. met*: A method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49–90. J. Feldman and S. Narayanan. 2004. Embodied meaning in a neural theory of language. Brain and Language, 89(2):385–392. C. Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database (ISBN: 0-262-06197-X). MIT Press, first edition. C. J. Fillmore, C. R. Johnson, and M. R. L. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16(3):235–250. M. Gedigan, J. Bryant, S. Narayanan, and B. Ciric. 2006. Catching metaphors. In In Proceedings of the 3rd Workshop on Scalable Natural Language Understanding, pages 41–48, New York. D. Gentner. 1983. Structure mapping: A theoretical framework for analogy. Cognitive Science, 7:155– 170. R. Gibbs. 1984. Literal meaning and psychological theory. Cognitive Science, 8:275–304. A. Goatly. 1997. The Language of Metaphors. Routledge, London. M. Hesse. 1966. Models and Analogies in Science. Notre Dame University Press. D. Hofstadter and M. Mitchell. 1994. The Copycat Project: A model of mental fluidity and analogymaking. In K.J. 
Holyoak and J. A. Barnden, editors, Advances in Connectionist and Neural Computation Theory, Ablex, New Jersey. D. Hofstadter. 1995. Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. HarperCollins Publishers. Y. Karov and S. Edelman. 1998. Similarity-based word sense disambiguation. Computational Linguistics, 24(1):41–59. 695 P. Kingsbury and M. Palmer. 2002. From TreeBank to PropBank. In Proceedings of LREC-2002, Gran Canaria, Canary Islands, Spain. P. Koivisto-Alanko and H. Tissari. 2006. Sense and sensibility: Rational thought versus emotion in metaphorical language. In A. Stefanowitsch and S. T. Gries, editors, Corpus-Based Approaches to Metaphor and Metonymy, Berlin. Mouton de Gruyter. S. Krishnakumaran and X. Zhu. 2007. Hunting elusive metaphors using lexical resources. In Proceedings of the Workshop on Computational Approaches to Figurative Language, pages 13–20, Rochester, NY. G. Lakoff and M. Johnson. 1980. Metaphors We Live By. University of Chicago Press, Chicago. G. Lakoff, J. Espenson, and A. Schwartz. 1991. The master metaphor list. Technical report, University of California at Berkeley. B. L¨onneker and C. Eilts. 2004. A Current Resource and Future Perspectives for Enriching WordNets with Metaphor Information. In Proceedings of the Second International WordNet Conference— GWC 2004, pages 157–162, Brno, Czech Republic. B. L¨onneker-Rodman. 2008. The hamburg metaphor database project: issues in resource creation. Language Resources and Evaluation, 42(3):293–318. B. L¨onneker. 2004. Lexical databases as resources for linguistic creativity: Focus on metaphor. In Proceedings of the LREC 2004 Workshop on Language Resources for Linguistic Creativity, pages 9–16, Lisbon, Portugal. J. H. Martin. 1988. Representing regularities in the metaphoric lexicon. In Proceedings of the 12th conference on Computational linguistics, pages 396– 401. J. H. Martin. 1990. A Computational Model of Metaphor Interpretation. Academic Press Professional, Inc., San Diego, CA, USA. J. H. Martin. 1994. Metabank: A knowledge-base of metaphoric language conventions. Computational Intelligence, 10:134–149. J. H. Martin. 2006. A corpus-based analysis of context effects on metaphor comprehension. In A. Stefanowitsch and S. T. Gries, editors, Corpus-Based Approaches to Metaphor and Metonymy, Berlin. Mouton de Gruyter. Z. J. Mason. 2004. Cormet: a computational, corpus-based conventional metaphor extraction system. Computational Linguistics, 30(1):23–44. S. Narayanan. 1997. Knowledge-based action representations for metaphor and aspect (karma. Technical report, PhD thesis, University of California at Berkeley. S. Narayanan. 1999. Moving right along: A computational model of metaphoric reasoning about events. In Proceedings of AAAI 99), pages 121–128, Orlando, Florida. G. Nunberg. 1987. Poetic and prosaic metaphors. In Proceedings of the 1987 workshop on Theoretical issues in natural language processing, pages 198– 201. G. Orwell. 1946. Politics and the english language. Horizon. W. Peters and I. Peters. 2000. Lexicalised systematic polysemy in wordnet. In Proceedings of LREC 2000, Athens. Pragglejaz Group. 2007. MIP: A method for identifying metaphorically used words in discourse. Metaphor and Symbol, 22:1–39. A. Reining and B. L¨onneker-Rodman. 2007. Corpusdriven metaphor harvesting. In Proceedings of the HLT/NAACL-07 Workshop on Computational Approaches to Figurative Language, pages 5–12, Rochester, New York. P. Resnik. 1993. 
Selection and Information: A Classbased Approach to Lexical Relationships. Ph.D. thesis, Philadelphia, PA, USA. E. Shutova and S. Teufel. 2010. Metaphor corpus annotated for source - target domain mappings. In Proceedings of LREC 2010, Malta. E. Shutova. 2010. Automatic metaphor interpretation as a paraphrasing task. In Proceedings of NAACL 2010, Los Angeles, USA. G. J. Steen. 2007. Finding metaphor in discourse: Pragglejaz and beyond. Cultura, Lenguaje y Representacion / Culture, Language and Representation (CLR), Revista de Estudios Culturales de la Universitat Jaume I, 5:9–26. A. Stefanowitsch. 2006. Corpus-based approaches to metaphor and metonymy. In A. Stefanowitsch and S. T. Gries, editors, Corpus-Based Approaches to Metaphor and Metonymy, Berlin. Mouton de Gruyter. L. Sun and A. Korhonen. 2009. Improving verb clustering with automatically acquired selectional preferences. In Proceedings of EMNLP 2009, pages 638–647, Singapore, August. R. Tourangeau and R. Sternberg. 1982. Understanding and appreciating metaphors. Cognition, 11:203– 244. T. Veale and Y. Hao. 2008. A fluid knowledge representation for understanding and generating creative metaphors. In Proceedings of COLING 2008, pages 945–952, Manchester, UK. 696 A. M. Wallington, J. A. Barnden, P. Buchlovsky, L. Fellows, and S. R. Glasbey. 2003. Metaphor annotation: A systematic study. Technical report, School of Computer Science, The University of Birmingham. Y. Wilks. 1975. A preferential pattern-seeking semantics for natural language inference. Artificial Intelligence, 6:53–74. Y. Wilks. 1978. Making preferences more active. Artificial Intelligence, 11(3):197–223. 697
2010
71
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 698–709, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A Game-Theoretic Model of Metaphorical Bargaining Beata Beigman Klebanov Kellogg School of Management Northwestern University [email protected] Eyal Beigman Washington University in St. Louis [email protected] Abstract We present a game-theoretic model of bargaining over a metaphor in the context of political communication, find its equilibrium, and use it to rationalize observed linguistic behavior. We argue that game theory is well suited for modeling discourse as a dynamic resulting from a number of conflicting pressures, and suggest applications of interest to computational linguists. 1 Introduction A 13 Dec 1992 article in The Times starts thus: The European train chugged out of the station last night; for most of the day it looked as if it might be stalled there for some time. It managed to pull away at around 10:30 pm only after the Spanish prime minister, Felipe Gonzalez, forced the passengers in the first class carriages into a last minute whip round to sweeten the trip for the European Community’s poor four: Spain, Portugal, Greece and Ireland. The fat controller, Helmut Kohl, beamed with satisfaction as the deal was done. The elegantlysuited Francois Mitterrand was equally satisfied. But nobody was as pleased as John Major, stationmaster for the UK presidency, for whom the agreement marked a scarce high point in a battered premiership. The departure had actually been delayed by seven months by Danes on the line. Just when that problem was solved, there was the voluble outbreak, orchestrated by Spain, from the poor four passengers demanding that they should travel free and be given spending money, too. The coupling of the carriages may not be reliably secure but the pan-European express is in motion. That few seem to agree the destination suggests that future arguments are inevitable at every set of points. Next stop: Copenhagen. Apart from an entertaining read, the extended metaphor provides an elaborate conceptual correspondence between a familiar domain of train journeys and the unfolding process of European integration. Carriages are likened to nation states; passengers to their peoples; treaties to stations; politicians to responsible rail company employees. In a compact form, the metaphor gives expression to both the small and the large scale of the process. It provides for the recent history: Denmark’s failure to ratify the 1992 Maastricht treaty until opt-outs were negotiated later that year is compared to dissenters sabotaging the journey by laying on the tracks (Danes on the line); negotiations over the Cohesion Fund that would provide less developed regions with financial aid to help them comply with convergence criteria are likened to second class carriages with poor passengers for whom the journey had to be subsidized. At a more general level, the European integration is a purposeful movement towards some destination according to a worked out plan, getting safely through negotiation and implementation from one treaty to another, as a train moving on its rails through subsequent stations, with each nation being separate yet tied with everyone else. 
Numerous inferences regarding speed, timetables, stations, passengers, different classes of tickets, temporary obstacles on the tracks, and so on can be made by the reader based on the knowledge of train journeys, giving him or her a feeling of an enhanced understanding1 of the highly complex process of European integration. So apt was the metaphor that political fights were waged over its details (Musolff, 2000). Worries about destination were given an eloquent expression by Margaret Thatcher (Sunday Times, 20 Sept 1992): She warned EC leaders to stop their endless round of summits and take notice of their own people. “There is a fear that the European train will thunder forward, laden with its customary cargo of gravy, towards a destination neither wished for nor understood by electorates. But the train can be stopped,” she said. 1More on enhanced understanding in sections 3.2 and 4.2. 698 The metaphor proved flexible enough for further elaboration. John Major, a Conservative PM of Britain, spoke on June 1st, 1994 about his vision of the decision making at the EU level, saying that he had never believed that Europe must act as one on every issue, and advocating “a sensible new approach, varying when it needs to, multitrack, multi-speed, multi-layered.” He attempted to turn a largely negative Conservative take on the European train (see Thatcher above) into a tenable positive vision — each nation-carriage is now presumably a rather autonomous entity, waiting on a side track for the right locomotive, in a huge yet smoothly operating railroad system. Major’s political opponents offered their counter-frames. In both cases, the imagery of a large transportation system was taken up, yet turned around to suggest that “multi, for everyone” amounts to Britain being in “the slow lane,” and a different image was suggested that makes the negative evaluation of Britain’s opt-outs more poignant — a football metaphor, where relegation to the second division is a sign of a weak performance, and a school metaphor, where Britain is portrayed as an under-achiever: John Cunningham, Labour He has admitted that his Government would let Britain fall behind in Europe. He is apparently willing to offer voluntary relegation to the second division in Europe, and he isn’t even prepared to put up a fight. I believe that in any two-speed Europe, Britain must be up with those in the fast lane. Clearly Mr Major does not. Paddy Ashdown, Liberal Democrat Are you really saying that the best that Britain can hope for under your leadership is ... the slow lane of a two-speed Europe? Most people in this country will want to aim higher, and will reject your view of a ‘drop-out’ Britain. The pro-European camp rallied around the “Britain in the slow lane” version as a critical stance towards the government’s European policy. Of the alternative metaphors, the school metaphor has some traction in the Euro discourse, where the European (mainly German) financial officers are compared to school authorities, and governments struggling to meet the strict convergence criteria to enter the Euro are compared to pupils that barely make the grade with Britain as a ‘drop-out’ who gave up even trying (Musolff, 2000). 
The fact that European policy is being communicated and negotiated via a metaphor is not surprising; after all, “there is always someone willing to help us think by providing us with a metaphor that accords with HIS views.”2 From the point of view of the dynamics of political discourse, the puzzle is rather the apparent tendency of politicians to be compelled by the rival’s metaphorical framework. Thatcher tries to turn the train metaphor used by the pro-EU camp around. Yet, assuming metaphors are matters of choice, why should Thatcher feel constrained by her rival’s choice, why doesn’t she ignore it and merely suggest a new metaphor of her own design? As the evidence above suggests, this is not Thatcher’s idiosyncrasy, as Major and his rivals acted similarly. Can this dynamic be explained? In this article, we use the explanatory framework of game theory, seeking to rationalize the observed behavior by designing a game that would produce, at equilibrium, the observed dynamics. Specifically, we formalize the notion that the price of “locking” the public into a metaphorical frame of reference is that a politician is coerced into staying within the metaphor as well, even if he or she is at the receiving end of a rival’s rhetorical move. Since the use of game theory is not common in computational linguistics, we first explain its main attributes, justify our decision to make use of it, and draw connections to research questions that can benefit from its application (section 2). Next, we design the game of bargaining over a metaphor, and find its equilibrium (section 3), followed by a discussion (section 4). 2 Game-Theoretic models The basic construct is that of a game, that is, a model of participants in an interaction (called “players”), their goals (or “utilities”) and allowable moves. Different moves yield different utilities for a player; it is assumed that each player would pick a strategy that maximizes her utility. The observable is the actual sequence of moves; importantly, these are assumed to be the optimal outcome (an equilibrium) of the relevant game. A popular notion of equilibrium is Nash equilibrium (Nash, 1950). For extensive form games (the type employed in this paper), the notion of subgame perfect equilibirum is typically used, denoting a Nash equilibrium that would remain such if the players start from any stage of the evolving game (Selten (1975; 1965)). The task of a game theorist is to reverseengineer the model for which the observed se2Capitalization in the original, Bolinger (1980, p. 146). 699 quence of actions is an equilibrium. The resulting model is thereby able to rationalize the observed behavior as a naturally emerging dynamics between agents maximizing certain utility functions. In economics, game-theoretic models are used to explain price change, organization of production, and market failures (Mas-Colell et al., 1995; von Neumann and Morgenstern, 1944); in biology — the operation of natural selection processes (Axelrod and Hamilton, 1981; Maynard Smith and Price, 1973); in social sciences — political institutions, collective action, and conflict (Greif, 2006; Schelling, 1997; North, 1990). In recent applications in linguistics, pragmatic phenoma such as implicatures are rendered as an equilibrium outcome of a communication game (J¨ager and Ebert, 2008; van Rooij, 2008; Ross, 2007; van Rooij and Schulz, 2004; Parikh, 2001; Glazer and Rubinstein, 2001; Dekker and van Rooy, 2000). Computing equilibria is simple for some games and quite evolved for others. 
For example, computing the equilibrium of a zero-sum game is equivalent to LP optimization (Luce and Raiffa, 1957); an equilibrium of general bimatrix games can be found using a pivoting algorithm (von Stengel, 2007; Lemke and Howson, 1964). Interesting connections have been pointed out between game theory and machine learning: Freund and Schapire (1996) present both online learning and boosting as a repeated zero-sum game; Shalev-Shwartz and Singer (2006) show similarly that loss minimization in online learning is akin to an equilibrium path in a repeated game. While game theoretic models are not much utilized in computational linguistics, they are quite attractive for tackling some of the problems computational linguists are interested in. For example, generation of referring expressions (Paraboni et al., 2007; Gardent et al., 2004; Siddharthan and Copestake, 2004; Dale and Reiter, 1995) can be rendered as a communication game with utility functions that reflect pressures to use shorter expressions while avoiding excessive ambiguity (Clark and Parikh, 2007), with corpora annotated for entity mentions informing the design of a model. Generally, computational linguistics research produces algorithms to detect entities of various kinds, be it topics, named entities, metaphors, moves in multi-party conversations, or syntactic constructions in large corpora; such primary data can be used to trace developments not only in chronological terms (Gruhl et al., 2004; Allan, 2002), but in strategic terms, i.e. in terms that reflect agendas of the actors, such as political agendas in legislatures (Quinn et al., 2006) or activist forums (Greene and Resnik, 2009), research agendas in group meetings (Morgan et al., 2001), or social agendas in speed-dates (Jurafsky et al., 2009). Game theoretic models are well suited for modeling dynamics that emerge under multiple, possibly conflicting constraints, as we exemplify in this article. 3 The model We extend Rubinstein's (1982) model of negotiation through offers and counter-offers between two players with a public benefit constraint. The model consists of (1) two players representing the opposing sides, (2) a set of frames X ⊂ R^n, compact and convex, (3) preference relations described by continuous utility functions U_1, U_2 : X → R_+, (4) a sequence of frames X_0 ⊂ X_1 ⊂ ... ⊂ 2^X that can be suggested to the public, and (5) a sequence of public preferences over frames in X_t for t = 0, 1, 2, ..., described by a public utility function U^p_t. The game proceeds as follows. Initially the frame is F_0 = X. In odd rounds player 1 appeals to the public with a frame A^1_t ∈ X_t|_{F_t}, where X_t|_{F_t} = {A ∈ X_t : A ⊂ F_t}, and player 2 counters with a frame A^2_t ∈ X_t|_{F_t}. The public chooses one of the frames based on U^p_t(A^i_t), with ties broken in player 1's favor. The accepted frame becomes the current frame F_{t+1} for the next round. In even rounds the parts of players 1 and 2 are reversed. A finite sequence F_0, ..., F_{t-1} gives the history of the bargaining process up to t. A strategy σ_i of player i is a function specifying, for any history h = {F_0, ..., F_{t-1}}, the move player i makes at time t, namely the frame A^i_t she chooses to address the public. A sequence F_0, F_1, F_2, F_3, ... describes a path the bargaining process can take, leading to an outcome ∩_{t=0}^∞ F_t. The players' utility for an outcome is given by U_i = lim_{t→∞} ∫_{F_t} U_i(x) dχ_{F_t} for i = 1, 2, where χ_{F_t} is a probability measure on F_t. If ∩_{t=0}^∞ F_t = {x}, the utility is the point utility of x; otherwise it is the expected utility on the intersection set.
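To make the round structure concrete, here is a minimal Python sketch of the protocol just described. Frames are closed sub-intervals of the initial frame F_0 = [0, 1] (anticipating the one-dimensional setting of Section 3.1), strategies are callables mapping the current frame to a proposed sub-frame, and the public utility is passed in as a function (a concrete entropy-based choice is sketched after Section 3.2). The function names and simplifications here are illustrative and not taken from the paper.

```python
# Minimal sketch of the alternating frame-bargaining protocol: each round, one
# player proposes a sub-frame of the current frame, the rival counters, and the
# public keeps whichever proposal it values more, with ties broken in favor of
# the round's first mover (one simple reading of the tie-breaking rule above).
# Proposals are assumed to be sub-intervals of the current frame.

def play(strategy_1, strategy_2, public_utility, rounds=30):
    frame = (0.0, 1.0)                       # F_0 = X = [0, 1]
    movers = (strategy_1, strategy_2)
    for t in range(rounds):
        first, second = movers if t % 2 == 0 else (movers[1], movers[0])
        a1 = first(frame)                    # frame offered by the round's first mover
        a2 = second(frame)                   # the rival's counter-frame
        frame = a1 if public_utility(a1, frame) >= public_utility(a2, frame) else a2
    return frame                             # the (near-)limit frame after `rounds` steps
```

The sketches after Sections 3.2 and 3.4 below supply concrete utilities and strategies for this loop.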
3.1 Player utility For a given issue under discussion, such as the European integration process, we order the possible states of the world along a single dimension that spans the policy variations proposed by the different players (politicians). Politics of a single issue are routinely modeled as lying on a single dimension (indeed, Poole and Rosenthal (1997) argue that no more than two dimensions are needed to account for voting patterns on all issues in the US Congress). In the British context, various configurations of the unfolding European reality are situated along the line between a high degree of integration and complete separatism; the Liberal Democrats are the most pro-European party, while the United Kingdom Independence Party are at the far-right end of the scale, preferring British withdrawal from the EU. The two major parties, Labour and the Conservatives (Tories), prefer intermediate left-leaning and right-leaning positions, respectively. A schematic description is shown in figure 1. [Figure 1: Preferences on the pro-anti Europe axis (party peaks ordered LibDem, Labour, Tories, UKIP).] The utilities of the different players can in this case be described as continuous single-peaked functions over an interval (single-peakedness is a common assumption in position modeling in political science (Downs, 1957)). Thus X = [0, 1], and the utility functions are U_i(x) = φ(||x − v_i||) for v_i ∈ X, where φ is a monotonically strictly decreasing function and || · || is Euclidean distance. 3.2 Public utility We note the difference between two types of utilities: the utility of the players is over outcomes, while the utility of the public is over sets of outcomes (frames). The latter does not represent a utility the public has for one outcome or another, but rather a utility it has for an enhanced understanding. Thus, the public's utility from a frame is a function of the information content of the proposed frame relative to the current frame, i.e. the relative entropy of the two sets. (The notion that new beliefs are refinements of existing ones is current in contemporary theorizing about formation and change of beliefs, evaluations, and preferences. An update based on the latest available information is consistent with memory-based theories; in our model, in the equilibrium, the current frame contains information about the path so far, thus early stages of the bargaining process are in some sense integrated into the current frame, compatible with the rival, online model of belief formation. See Druckman and Luria (2000) for a review of the relevant literature.) Formally, if the accepted frame at time t is F_t, then for any Borel set A ⊂ F_t the public utility for A is U^p_t(A) = Π(Ent_t(A)), where Ent_t(A) = −µ_t(A) log µ_t(A) for a continuous probability measure µ_t on F_t, and Π is a continuous, monotone ascending function; for A ⊄ F_t, U^p_t(A) = 0. We take µ_t to be the relative length of the segment, µ_t(A) = |A| / |F_t|, hence the entropy maximizing subsegments are of length |F_t| / 2.
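As a companion to the protocol sketch above, the following gives concrete stand-ins for the two kinds of utility just defined. The particular choice of φ (negative distance) and the choice of Π as the identity are illustrative assumptions only; the model merely requires φ to be strictly decreasing and Π to be continuous and monotone ascending. The peak value used below is hypothetical.

```python
import math

def player_utility(peak):
    """Single-peaked U_i(x) = phi(||x - v_i||) with phi(d) = -d, on X = [0, 1]."""
    return lambda x: -abs(x - peak)

def public_utility(frame, parent):
    """U^p_t(A) = Pi(Ent_t(A)) with mu_t(A) = |A| / |F_t|, taking Pi as the identity."""
    mu = (frame[1] - frame[0]) / (parent[1] - parent[0])
    return -mu * math.log(mu) if 0.0 < mu < 1.0 else 0.0

u_labour = player_utility(0.40)                  # hypothetical peak, purely for illustration
print(u_labour(0.40), u_labour(0.90))            # utility falls off with distance from the peak
print(public_utility((0.0, 0.5), (0.0, 1.0)))    # a half-length proposal within F_0
```

Both functions plug directly into the play() loop from the previous sketch.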
Each in her turn, the players propose to the public to concentrate on a subset of the currently considered states of affairs, arguing that those are the likelier ones to obtain, hence merit further attention. The metaphor used to deliver the proposal describes the newly proposed subset in a way that makes those states-of-affairs that are in it aligned with the metaphor, whereas all other states are left out of the proposed metaphorical frame. As the game proceeds, the public attention is concentrated on successively smaller sets of eventualities, and these are given a more and more detailed metaphoric description, providing the educational gratification of increasingly knowing better and better what is going on. At each step, each player strives to provide maximum public gratification while leading the public to focus on the frame (i.e. subset of states of affairs) that best meets the player's preferences. (We note that in our model every utterance has an impact on the public for which the player bears the consequences, and is therefore a costly strategic move in the game. This is different from models of cheap talk such as Aumann (1990) and Lewis (1969), where communication is devoid of strategic moves and is used primarily as a coordination device.) Figure 2 sketches the frame negotiation through the train metaphor, from some point in time when the general train metaphor got established, through Thatcher's fleshing out of the issue of excessive speed and unclear direction, Major's multi-track corrective, and the reply of his opponents on the left. The final frame has all those states of affairs that fit the extended metaphor – everyone is acting within the same broad system of rules, with Britain and perhaps others sometimes wanting to negotiate special, more gradual procedures, which would leave Britain less tightly integrated into the community than some other European partners. [Figure 2: Bargaining over the train metaphor; successive frames: "Integration is like a train journey ...", "... that is unfolding too fast", "... but it is possible to regulate the speed", "... in which case we'll go slower than others".] 3.4 The equilibrium A pair of strategies (σ_1, σ_2) is a Nash equilibrium if there is no deviation strategy σ such that (σ, σ_2) leads to an outcome with higher utility for player 1 than the outcome of (σ_1, σ_2), and the same for player 2. A subgame is the set of all possible moves following a history h = {F_0, ..., F_t}; in our case it is equivalent to a game with an initial frame F_t and the corresponding utilities. A sub-strategy is that part of the original strategy that is a strategy on the subgame. A pair of strategies is a subgame perfect equilibrium if, for any subgame, their sub-strategies are a Nash equilibrium. Theorem 1 In the frame bargaining game with single-peaked preferences: 1. There exists a canonical subgame perfect equilibrium path F_0, F_1, F_2, ... such that ∩_{t=0}^∞ F_t = {x}. 2. For any subgame perfect equilibrium path F'_0, F'_1, F'_2, ... there exists T such that ∩_{t=0}^∞ F'_t = ∩_{t=0}^T F_t. The theorem states that the outcome of the bargaining will always be a frame on the canonical path. The rivals would suggest more specific frames either until convergence or until a situation where any further specification would produce a frame that "misses their point," so to speak, by removing too much of the favorable outcome space for both players.
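Continuing the toy implementation (this block reuses play() and public_utility() from the sketches above), the canonical strategies behind Theorem 1 can be written down directly: each player proposes the half-length sub-interval whose free boundary sits as close as possible to her own peak. The peaks 0.30 and 0.70 are arbitrary illustrative values, and canonical() is this sketch's naming, not the paper's.

```python
def canonical(peak, side):
    """sigma*: propose the entropy-maximizing (half-length) sub-interval whose
    boundary facing the rival is placed as close to this player's peak as the
    current frame allows (the construction used in the proof of Theorem 1)."""
    def strategy(frame):
        l, r = frame
        h = (r - l) / 2.0                        # half the current frame's length
        if side == "left":                       # this player's peak lies left of the rival's
            right = min(max(peak, l + h), r)     # right endpoint as close to the peak as feasible
            return (right - h, right)
        left = min(max(peak, l), r - h)          # symmetric rule for the right-peaked player
        return (left, left + h)
    return strategy

final = play(canonical(0.30, "left"), canonical(0.70, "right"), public_utility, rounds=40)
print(final)   # the frame shrinks to (essentially) a single point between the two peaks
```

With these peaks the limit point falls strictly between 0.30 and 0.70, in line with the proof's observation that v_1 ≤ x* ≤ v_2.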
Figure 3 shows a situation where parties could decide to stall on the current frame: if player 1 has to choose between retaining F_0 and playing F_1, which would result in the rival's playing F_2, player 1 might choose to remain in F_0 if the utility of any outcome of the subgame starting from F_2 is lower than that of F_0, as long as player 1 believes that player 2 would reason similarly. [Figure 3: Stalled bargaining (nested frames F_0 ⊃ F_1 ⊃ F_2 relative to the players' peaks).] The idea of the proof is to construct a pair of strategies where each side attempts to pull the publicly accepted frame in the direction of its peak utility point. We show, assuming the peak of the first mover is to the left of the peak of the second, that any deviation of the first mover would enable the second to shift the public frame more to the right, to an outcome of lower utility to the first mover. The full details of the proof of part 1 are given in the appendix; part 2 is proved in an accompanying technical report. The equilibrium exhibits the following properties: (a) a first mover's advantage — for any player, the outcome would be closer to her peak point if she moves first than if she moves second; (b) a centrist's advantage — if a player moves first and her peak is closer to the middle of the initial frame, she can derive a higher utility from the outcome than if her peak were further from the middle. Please see the appendix for justifications. 4 Discussion 4.1 Political communication This article studies some properties of frame bargaining through metaphor in political communication, where rival politicians choose how to elaborate the current metaphor to educate the public about the ongoing situation in a way most consistent with their political preferences. Modeling the public preferences as the highest relative entropy subset of possible states-of-affairs, we show that strategic choices by the politicians lead to a subgame perfect equilibrium where the less politically extreme player who moves first is at an advantage. In a democracy, such a player would typically be the government, as the bulk of voters do not by definition vote for extreme views, and since the government is the agent that brings about changes in the current states of affairs, and is thus the first and most prepared to explain them to the public. Indeed, Entman's model of frame activation in political discourse is hierarchical, with the government (administration) being the topmost frame-activator, and opposition and media elites typically reacting to the administration's frame (Entman, 2003).
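The first mover's advantage noted above can also be seen numerically with the same toy pieces (play(), public_utility() and canonical() from the earlier sketches; the peaks remain the arbitrary 0.30 and 0.70):

```python
p_left, p_right = canonical(0.30, "left"), canonical(0.70, "right")
print(play(p_left, p_right, public_utility, rounds=40))   # the left-peaked player opens the bargaining
print(play(p_right, p_left, public_utility, rounds=40))   # roles swapped: the limit point moves toward 0.70
```

Swapping the two runs changes nothing about the players' preferences, only who speaks first, yet the limit frame shifts toward the opening player's peak.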
For example, WAR AS A COMPETITIVE GAME metaphor emphasizes the glory of winning and the shame of defeat, but hides the death-and-suffering aspect of the war, which makes sports metaphors a strategic choice when wishing to arouse a pro-war sentiment in the audience (Lakoff, 1991). Such subliminal framing can often be effectively contested by merely exposing the frame. Our examples show a different use of metaphor. Far from being subliminal or covert, the details of the metaphor, its implications, and the evaluation promoted by any given version are an important tool in the public discussion of a complex political issue. The function of metaphorical framing here resembles a pedagogical one, where rendering an abstract theory in physics (such as electricity) in concrete commonsensical terms (such as water flow) is an effective strategy to enhance the students’ understanding of the former (Gentner and Gentner, 1983). The measure of success for a given version of the frame is its ability to sway the public in the evaluative direction envisioned by the author by providing sufficient educational benefit, so-to-speak, that is, convincingly rendering a good portion of a complex reality in accessible terms. Once a frame is found that provides extensive education benefit, such as the EUROPEAN INTEGRATION AS TRAIN JOURNEY above, a politician’s attempt to debunk a metaphor as inappropriate risk public antagonism, as this would be akin to taking the benefit of enhanced understanding away. Thus, rather than contesting the validity of the metaphoric frame, politicians strive to find a way to turn the metaphor around, i.e. accept the general framework, but focus on a previously unexplored aspect that would lead to a different evaluative tilt. Our results show that being the first to use an effective metaphor that manages to lock the public in its framework is a strategic advantage as the need to communicate with the same public would compel the rival to take up the metaphor of your choice. To our knowledge, this is the first explanation of the use of extended metaphor in political communication on a complex issue in terms of the agendas of the rival parties and the changing disposition of the public being addressed. It is an open question whether similar “locking in” of the public can be attained by non-metaphorical means, and whether the ensuing dynamics would be similar. 4.3 Social dynamics This article contributes to the growing literature on modeling social linguistic behavior, like debates (Somasundaran and Wiebe, 2009), dating (Jurafsky et al., 2009; Ranganath et al., 2009), collaborative authoring and editing in wikis (Leuf and Cunningham, 2001) such as Wikipedia (Vuong et al., 2008; Kittur et al., 2007; Vi´egas et al., 2004). The latter literature in particular sees the social activity as an unfolding process, for example, detecting the onset and resolution of a controversy over the content of a Wikipedia article through tracking article talk7 and deletion-and-reversion patterns. Somewhat similarly to the metaphor debate discussed in this article, Vi´egas et al. (2004) note first-mover advantage in Wikipedia authoring, that is, the first version gives the tone for the subsequent edits and has its parts survive for relatively many editing cycles. 
Finding out how the initial contribution constrains and guides subsequent edits of the content of a Wikipedia article and what kind of argumentative strategies are employed in persuading others to retain one’s contribution is an interesting direction for future research. A number of recent studies of the linguistic aspects of social processes are construed as if the 7a page separate from the main article that is devoted to the discussion of the edits 703 events are taking place all-at-once — there is no differentiation between early and later stages of a debate in Somasundaran and Wiebe (2009) or initial and subsequent speed-dates for the same subject in Jurafsky et al. (2009). Yet adopting a dynamic perspective stands to reason in such cases. For example, Somasundaran and Wiebe (2009) built a system for recognizing stance in an online debate (such as pro-iPhone or pro-Blackberry on http://www.covinceme.net). They noticed that the task was complicated by concessions — acknowledgments of some virtues of the competitor before stating own preference. This is quite possibly an instance of debate dynamics whereby as the debate evolves certain common ground emerges between the sides and the focus of the debate changes from the initial stage of elucidating which features are better in which product to a stage where the “facts” are settled and acknowledged by both sides and the debate moves to evaluation of the relative importance of those features. As another example, consider the construction of statistical models of various emotional and personality traits based on a corpus of speed dates such as Jurafsky et al. (2009). Take the trait of intelligence. In their experiment with speed-dates, Fisman et al. (2006) found that males tend to disprefer females they perceive as more intelligent or ambitious than themselves. Consequently, an intelligent female might choose to act less intelligent in later rounds of speed dating if she has not so far met a sufficiently intelligent male, assuming she prefers a less-intelligent male to no match at all. Better sensitivity to the dynamics of social processes underlying the observed linguistic communication will we believe result in increased interest in game-theoretic models, as these are especially well suited to handle cases where the sides have certain goals and adapt their moves based on the current situations, the other side’s move, and possibly other considerations, such as the need to address effectively a wider audience, beyond the specific interlocutors. A game theoretic explanation advances the understanding of the process being modeled, and hence of the applicability, and the potential adaptation, of statistical models developed on a certain dataset to situations that differ somewhat from the original data: For example, a corpus with more rounds of speed-dates per participant might suddenly make females seem smarter, or a debate with a longer history would feature more, and perhaps more elaborate, concessions. 5 Empirical challenges We suggested that models of dynamics such as the one presented in this article be built over data where entities of interest are clearly identified. This article is based on chapters 1 and 2 of the book by Musolff (2000) which itself is informed by a corpus-linguistic analysis of metaphor in media discourse in Britain and Germany. We now discuss the state of affairs in empirical approaches to detecting metaphors. 
5.1 Metaphors in NLP Metaphors received increasing attention from computational linguistics community in the last two decades. The tasks that have been addressed are explication of the reasoning behind the metaphor (Barnden et al., 2002; Narayanan, 1999; Hobbs, 1992); detection of conventional metaphors between two specific domains (Mason, 2004); classification of words, phrases or sentences as metaphoric or non-metaphoric (Krishnakumaran and Zhu, 2007; Birke and Sarkar, 2006; Gedigian et al., 2006; Fass, 1991). We are not aware of research on automatic methods specifically geared to recognition of extended metaphors. Indeed, most computational work cited above concentrates on the detection of a local incongruity due to a violation of selectional restrictions when the verb or one of its arguments is used metaphorically (as in Protesters derailed the conference). Extended metaphors are expected to be difficult for such approaches, since many of the clauses are completely situated in the source domain and hence no local incongruities exist (see examples on the first page of this article). 5.2 Data collection Supervised approaches to metaphor detection need to rely on annotated data. While metaphors are ubiquitous in language, an annotation project that seeks to narrow the scope of relevant metaphors down to metaphors from a particular source domain (such as train journeys) that describe a particular target domain (such as European integration) and are uttered by certain entities (such as senior UK politicians) face the problem of sparsity of the relevant data in the larger discourse: A random sample of the size amenable to human an704 notation is unlikely to capture in sufficient detail material pertaining to the one metaphor of interest. To increase the likelihood of finding mentions of the source domain, a lexicon of words from the source domain can be used to select documents (Hardie et al., 2007; Gedigian et al., 2006). Another approach is metaphor “harvesting” – hypothesizing that metaphors of interest would occur in close proximity to lexical items representing the target domain of the metaphor, such as the 4 word window around the lemma Europe used in Reining and L¨onneker-Rodman (2007). 5.3 Data annotation A further challenge is producing reliable annotations. Pragglejaz (2007) propose a methodology for testing metaphoricity of a word in discourse and report κ=0.56-0.70 agreement for a group of six highly expert annotators. Beigman Klebanov et al. (2008) report κ=0.66 for detecting paragraphs containing metaphors from the source domains LOVE and VEHICLE with multiple non-expert annotators, though other source domains that often feature highly conventionalized metaphors (like structure or foundation from BUILDLING domain) or are more abstract and difficult to delimit (such as AUTHORITY) present a more challenging annotation task. 5.4 Measuring metaphors A fully empirical basis for the kind of model presented in this paper would also involve defining a metric on metaphors that would allow measuring the frame chosen by the given version of the metaphor relatively to other such frames – that is, quantifying which part of the “integration is a train journey” metaphor is covered by those states of affairs that also fit Thatcher’s critical rendition. 
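To illustrate the proximity-based "harvesting" strategy discussed in the data collection part above (Section 5.2), here is a small sketch that collects source-domain vocabulary occurring within a fixed window of a target-domain anchor lemma. The anchor string, the window size of four, and the toy source-domain lexicon are placeholders chosen for this example; they are not the actual resources used in the cited studies.

```python
# Keep, for each sentence, any source-domain words found within +/- `window`
# tokens of an occurrence of the target-domain anchor (here: "europe").
SOURCE_LEXICON = {"train", "track", "carriage", "station", "locomotive", "rails", "journey"}

def harvest(sentences, anchor="europe", window=4, lexicon=SOURCE_LEXICON):
    candidates = []
    for sent in sentences:
        tokens = [t.strip(".,;:!?\"'") for t in sent.lower().split()]
        for i, tok in enumerate(tokens):
            if anchor in tok:                     # matches "europe", "european", ...
                nearby = tokens[max(0, i - window): i + window + 1]
                hits = [w for w in nearby if w in lexicon]
                if hits:
                    candidates.append((sent, hits))
    return candidates

print(harvest(["The European train chugged out of the station last night."]))
```

A real pipeline would of course lemmatize and deduplicate matches, and would treat the harvested items only as candidates for subsequent manual metaphor annotation.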
6 Conclusion This article addressed a specific communicative setting (rival politicians trying to “sell” to the public their versions of the unfolding realities and necessary policies) and a specific linguistic tool (an extended metaphor), showing that the particular use made of metaphor in such setting can be rationalized based on the characteristics of the setting. Various questions now arise. Given the central role played by the public gratification constraint in our model, would conversational situations without the need to persuade the public, such as meetings of small groups of peers or phone conversations between friends, tend less to the use of extended metaphor? Conversely, does the use of extended metaphor in other settings testify to the existence of presumed onlookers who need to be “captured” in a particular version of reality — as in pedagogic or poetic context? Considerations of the participants’ agendas and their impact on the ensuing dynamics of the exchange would we believe lead to further interest in game theoretic models when addressing complex social dynamics in situations like collaborative authoring, debates, or dating, and will augment the existing mostly statistical approaches with a broader picture of the relevant communication. A Proof of Existence of a Subgame Perfect Equilibrium For a segment [a, b] and a≤v1<v2≤b let U1(x)=φ(||x −v1||) and U2(x)=φ(||x −v2||) be utility functions with peaks v1 and v2, respectively. For a history h={F0, . . . , Ft} where Ft=[lt, rt], let σ∗ 1(h), player 1’s move, be defined as choosing Ft+1=[lt+1, rt+1] such that |Ft+1|=|Ft| 2 , and rt+1 is as close as possible to v1. σ∗ 2 sets lt+1 with respect to v2 in a symmetric fashion. Since Ft shrinks by half every round, limt→∞lt=limt→∞rt=x∗, converging to a point. We now show (σ∗ 1, σ∗ 2) is an equilibrium by showing that neither player has a profitable deviation. Notice that after the first round the subgame is identical to the initial game with F1 replacing F0, and the roles of players reversed. Player 2 had no influence on the choice of F1, hence she has a profitable deviation iff she has a profitable deviation on the continuation subgame where she is the first mover. It thus suffices to show that the first mover (player 1) has no profitable deviations to establish that (σ∗ 1, σ∗ 2) is an equilibrium. Since by definition σ∗ 2 always chooses an entropy maximizing segment, for player 1 to choose a non-entropy maximizing segment (more or less than half the length) amounts to yielding the round to player 2, which is equivalent in terms of the resulting accepted frame to a situation where player 1 chooses an entropy maximizing segment – the same one chosen by player 2. Thus we need to consider only deviations with entropy maximizing frames. Step 1: Suppose σ′ 1 is a strategy of player 1 and let F ′ 0, F ′ 1, F ′ 2, . . . be the sequence of frames on 705 the path corresponding to the pair (σ′ 1, σ∗ 2). Let t0 be the first move deviating from the equilibrium path, namely Ft0̸=F ′ t0. We first show that Ft0−1 could not be (a) completely to the left of v1 or (b) completely to the right of v2. Suppose (a) holds. Then by definition rt0−2=rt0−1<v1, and, inductively, r0=rt0−1<v1; this contradicts r0=1 that follows from F0=[0, 1]. Possibility (b) is similarly refuted. Therefore, the only two cases for Ft0−1 with respect to v1 are depicted in figure 4. Note that this implies v1≤x∗≤v2. !"# !$# Case 2: Case 1: Ft0−1 Ft0−1 rt0 Figure 4: Two cases of current frame location. 
Step 2: In case 1, σ∗ 1 will choose frames of type [lt, v1] for any t≥t0, and σ∗ 2 will do the same on any history in the continuation game, hence the outcome will eventually be v1. As this is player 1’s peak utility point, she has no profitable deviation. Step 3: In case 2, Ft0 is the leftmost entropy maximizing subsegment of Ft0−1 and the deviation F ′ t0 can only be a shift to the right namely r′ t0≥rt0. If player 2 could choose [v2, rt0+1] given rt0, she can still choose the same frame given r′ t0, so the outcome would be v2 and F ′ t0 was not profitable. If player 2 could not choose [v2, rt0+1] given rt0, implying that x∗<v2, but as a result of the deviation can now choose [v2, r′ t0+1], implying that the outcome would be v2, clearly player 1 has not benefited from the deviation since U1 is descending right of v1. If player 2 still cannot choose [v2, r′ t0+1] after the deviation, she would choose the rightmost entropy maximizing segment with l′ t0+1≥lt0+1. If this still allows player 1 to do [l′ t0+2, v1] and hence to lead to v1 as the outcome, it was possible in [lt0+2, v1] as well, so no profit is gained by having deviated. Otherwise, r′ t0+2≥rt0+2. Step 3 can be repeated ad infinitum to show that r′ t≥rt unless for some history h the deviation enables σ2(h)=[v2, r′ t]. In the former case we get limt→∞r′ t=x′≥x∗=limt→∞rt where ∩∞ t=1F ′ t={x′}. Since r′ t and rt are to the right of v1 and U1 is descending right of v1 it follows that U1(x∗)≥U1(x′). In the latter case x′≥v2. Since Ft is never strictly to the right of v2, x∗=limt→∞lt≤v2≤x′, therefore U1(x∗)≥U1(x′). In either case the deviation σ′ 1 cannot result in a better outcome for player 1. This finishes the proof that (σ∗ 1, σ∗ 2) is a Nash equilibrium. Notice that (σ∗ 1, σ∗ 2) prescribe sub-strategies on any subgame that are themselves Nash equilibria for the subgames, hence (σ∗ 1, σ∗ 2) is a subgame perfect equilibrium 2 First Mover’s Advantage: The proof of step 3 shows that having the left boundary of the current frame further to the right cannot yield a better outcome for player 1. Yet, if player 1’s first turn comes after that of player 2, she will start with a current frame with the left boundary further to the right than the initial frame before player 2 moved, since moving the left boundary is player 2’s equilibrium strategy. Hence a player would never achieve a better outcome starting second if both players are playing the canonical strategy. Centrist’s Advantage: Let M be the middle of F0. Consider a more extreme version of player 1 — player 1#. Suppose w.l.g. v# 1 <v1≤M. In case v# 1 <v1<v2, for all utilities u of the outcome of dynamics vs player 2, if player 1# could attain u, player 1 could attain u or more; the reverse is not true, for example when |v# 1 −lt|<|Ft| 2 ≤|v1 −lt| and player 1 (or 1#) is moving first. In case v2<v# 1 <v1, if player 1 (or 1#) moves first, she is able to force her peak point as the outcome. If v# 1 <v2<v1, player 1 can force v1 as the outcome, whereas player 1# would not necessarily be able to force v# 1 , as player 2 would pull the outcome towards v2. Hence a first moving centrist is never worse off, and often better off, than a first moving extremist. References James Allan, editor. 2002. Topic Detection and Tracking: Event-Based Information Organization. Norwell, MA:Kluwer Academic Publishers. Robert Aumann. 1990. Nash Equilibria are not SelfEnforcing. In Jean J. Gabszewicz, Jean-Francois Richard, and Laurence A. 
Wolsey, editors, Economic Decision-Making: Games, Econometrics and Optimisation, pages 201–206. Amsterdam: Elsevier. Robert Axelrod and William D. Hamilton. 1981. The evolution of cooperation. Science, 211(4489):1390– 1396. John A. Barnden, Sheila R. Glasbey, Mark G. Lee, and Alan M. Wallington. 2002. Reasoning in metaphor 706 understanding: The ATT-Meta approach and system. In Proceedings of COLING, pages 121–128. Beata Beigman Klebanov, Eyal Beigman, and Daniel Diermeier. 2008. Analyzing disagreements. In Ron Artstein, Gemma Boleda, Frank Keller, and Sabine Schulte im Walde, editors, Proceedings of COLING Workshop on Human Judgments in Computational Linguistics, pages 2–7, Manchester, UK, August. International Committee on Computational Linguistics. Julia Birke and Anoop Sarkar. 2006. A clustering approach for nearly unsupervised recognition of nonliteral language. In Proceedings of EACL, pages 329–336. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Resarch, 3:993–1022. Dwight Bolinger. 1980. Language – The Loaded Weapon. London: Longman. Robin Clark and Prashant Parikh. 2007. Game Theory and Discourse Anaphora. Journal of Logic, Language and Information, 16:265–282. Robert Dale and Ehud Reiter. 1995. Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 18:233–263. Paul Dekker and Robert van Rooy. 2000. Bidirectional optimality theory: An application of game theory. Journal of Semantics, 17(3):217–242. Anthony Downs. 1957. An economic theory of political action in a democracy. The Journal of Political Economy, 65(2):135–150. James Druckman and Arthur Luria. 2000. Preference formation. Annual Review of Political Science, 2:1– 24. Robert M. Entman. 2003. Cascading activation: Contesting the White House’s frame after 9/11. Political Communication, 20:415–432. Dan Fass. 1991. Met*: a method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49–90. Raymond Fisman, Sheena Iyengar, Emir Kamenica, and Itamar Simonson. 2006. Gender Differences in Mate Selection: Evidence from a Speed Dating Experiment. Quarterly Journal of Economics, 121(2):673–697. Yoav Freund and Robert E. Schapire. 1996. Game theory, on-line prediction, and boosting. In Proceedings of the annual conference on Computational Learning Theory, pages 325–332, Desenzano del Garda, Italy, June -July. Claire Gardent, Hlne Manu´elian, Kristina Striegnitz, and Marilisa Amoia. 2004. Generating Definite Descriptions: Non-Incrementality, Inference and Data. In Thomas Pechmann and Christopher Habel, editors, Multidisciplinary Approaches to Language Production. Mouton de Gruyter. Matt Gedigian, John Bryant, Srini Narayanan, and Branimir Ciric. 2006. Catching metaphors. In Proceedings of NAACL Workshop on Scalable Natural Language Understanding, pages 41–48. Deidre Gentner and Donald Gentner. 1983. Flowing waters or teeming crowds: Mental models of electricity. In D. Gentner and A. Stevens, editors, Mental models. Hillsdale, NJ: Lawrence Erlbaum. Jacob Glazer and Ariel Rubinstein. 2001. Debates and decisions: On a rationale of argumentation rules. Games and Economic Behavior, 36(2):158–173. Stephan Greene and Philip Resnik. 2009. More than Words: Syntactic Packaging and Implicit Sentiment. In Proceedings of NAACL, pages 503–511, Boulder, CO, June. Avner Greif. 2006. Institutions and the path to the modern economy: Lessons from medieval trade. Cambridge University Press. 
Daniel Gruhl, R. Guha, David Liben-Nowell, and Andrew Tomkins. 2004. Information diffusion through blogspace. In Proceedings of the 13th international conference on World Wide Web, pages 491–501. Andrew Hardie, Veronika Koller, Paul Rayson, and Elena Semino. 2007. Exploiting a semantic annotation tool for metaphor analysis. In Proceedings of the Corpus Linguistics Conference, Birmingham, UK, Julyt. Jerry Hobbs. 1992. Metaphor and abduction. In Andrew Ortony, Jon Slack, and Oliviero Stock, editors, Communication from an Artificial Intelligence Perspective: Theoretical and Applied Issues, pages 35– 58. Springer Verlag. Gerhard J¨ager and Christian Ebert. 2008. Pragmatic Rationalizability. In Proceedings of the 13th annual meeting of Gesellschaft fur Semantik, Sinn und Bedeutung, pages 1–15, Stuttgart, Germany, September-October. Dan Jurafsky, Rajesh Ranganath, and Dan McFarland. 2009. Extracting social meaning: Identifying interactional style in spoken conversation. In Proceedings of NAACL, pages 638–646, Boulder, CO, June. Aniket Kittur, Bongwon Suh, Bryan A. Pendleton, and Ed H. Chi. 2007. He says, she says: Conflict and coordination in Wikipedia. In CHI-07: Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pages 453–462, San Jose, CA, USA. 707 Saisuresh Krishnakumaran and Xiaojin Zhu. 2007. Hunting elusive metaphors using lexical resources. In Proceedings of NAACL Workshop on Computational Approaches to Figurative Language, pages 13–20. George Lakoff and Mark Johnson. 1980. Metaphors We Live By. Chicago University Press. George Lakoff. 1991. Metaphor and war: The metaphor system used to justify war in the Gulf. Peace Research, 23:25–32. Carlton E. Lemke and Joseph T. Howson. 1964. Equilibrium Points of Bimatrix Games. Journal of the Society for Industrial and Applied Mathematics, 12(2):413–423. Bo Leuf and Ward Cunningham. 2001. The Wiki way: quick collaboration on the Web. Boston: AddisonWesley. David Lewis. 1969. Convention. Cambridge, MA: Harvard University Press. Robert D. Luce and Howard Raiffa. 1957. Games and decisions. New York: John Wiley and Sons. Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green. 1995. Microeconomic theory. Oxford University Press. Zachary J. Mason. 2004. CorMet: a computational, corpus-based conventional metaphor extraction system. Computational Linguistics, 30(1):23–44. John Maynard Smith and George R. Price. 1973. The logic of animal conflict. Nature, 246(5427):15–18. Nelson Morgan, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Adam Janin, Thilo Pfau, Elizabeth Shriberg, and Andreas Stolcke. 2001. The Meeting Project at ICSI. In Proceedings of the HLT, pages 246–252, San Diego, CA. Andreas Musolff. 2000. Mirror images of Europe: Metaphors in the public debate about Europe in Britain and Germany. M¨unchen: Iudicium. Srini Narayanan. 1999. Moving right along: A computational model of metaphoric reasoning about events. In Proceedings of AAAI, pages 121–128. John F. Nash. 1950. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36(1):48–49. Douglass C. North. 1990. Institutions, institutional change, and economic performance. Cambridge University Press. Ivandr Paraboni, Kees van Deemter, and Judith Masthoff. 2007. Generating Referring Expressions: Making Referents Easy to Identify. Computational Lingusitics, 33(2):229–254. Prashant Parikh. 2001. The Use of Language. Stanford: CSLI Publications. Keith T. Poole and Howard Rosenthal. 1997. 
Congress: A Political-Economic History of Roll Call Voting. Oxford University Press. Group Pragglejaz. 2007. MIP: A Method for Identifying Metaphorically Used Words in Discourse. Metaphor and Symbol, 22(1):1–39. Kevin M. Quinn, Burt L. Monroe, Michael Colaresi, Michael H. Crespin, and Dragomir R. Radev. 2006. An automated method of topic-coding legislative speech over time with application to the 105th-108th U.S. Senate. Unpublished Manuscript. Rajesh Ranganath, Dan Jurafsky, and Dan McFarland. 2009. It’s not you, it’s me: Detecting flirting and its misperception in speed-dates. In Proceedings of EMNLP, pages 334–342, Singapore, August. Astrid Reining and Birte L¨onneker-Rodman. 2007. Corpus-driven metaphor harvesting. In Proceedings of the Workshop on Computational Approaches to Figurative Language, pages 5–12, Rochester, New York. Ian Ross. 2007. Situations and Solution Concepts in Game-Theoretic Approaches to Pragmatics. In Ahti-Veikko Pietarinen, editor, Game Theory and Linguistic Meaning, pages 135–147. Oxford, UK: Elsevier Ltd. Ariel Rubinstein. 1982. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97–109. Thomas C. Schelling. 1997. The strategy of conflict. Harvard University Press. Reinhard Selten. 1965. Spieltheoretische behandlung eines oligopolmodells mit nachfragetr¨agheit. Zeitschrift f¨ur die Gesamte Staatswissenschaft, 12:301–324. Reinhard Selten. 1975. Re-examination of the Perfectness Concept for Equilibrium Points in Extensive Form Games. International Journal of Game Theory, 4:25–55. Shai Shalev-Shwartz and Yoram Singer. 2006. Convex Repeated Games and Fenchel Duality. In Proceedings of NIPS, pages 1265–1272. Advaith Siddharthan and Ann Copestake. 2004. Generating referring expressions in open domains. In Proceedings of the ACL, pages 407–414, Barcelona, Spain, July. Swapna Somasundaran and Janyce Wiebe. 2009. Recognizing Stances in Online Debates. In Proceedings of the ACL, pages 226–234. Robert van Rooij and Katrin Schulz. 2004. Exhaustive Interpretation of Complex Sentences. Journal of Logic, Language and Information, 13(4):491–519. 708 Robert van Rooij. 2008. Games and Quantity implicatures. Journal of Economic Methodology, 15(3):261–274. Fernanda B. Vi´egas, Martin Wattenberg, and Kushal Dave. 2004. Studying cooperation and conflict between authors with history flow visualizations. In CHI-04: Proceedings of the SIGCHI conference on Human Factors in Computing Systems, pages 575– 582, Vienna, Austria. John von Neumann and Oskar Morgenstern. 1944. Theory of games and economic behavior. Princeton University Press. Bernhard von Stengel. 2007. Equilibrium computation for two-player games in strategic and extensive form. In Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay Vazirani, editors, Algorithmic Game Theory, pages 53–78. Cambridge University Press. Ba-Quy Vuong, Ee-Peng Lim, Aixin Sun, Minh-Tam Le, and Hady Wirawan Lauw. 2008. On ranking controversies in Wikipedia: models and evaluation. In Proceedings of the international conference on Web Search and Web Data Mining, pages 171–182, Palo Alto, CA, USA. 709
2010
72
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 710–719, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Kernel Based Discourse Relation Recognition with Temporal Ordering Information WenTing Wang1 Jian Su1 Chew Lim Tan2 1Institute for Infocomm Research 1 Fusionopolis Way, #21-01 Connexis Singapore 138632 {wwang,sujian}@i2r.a-star.edu.sg 2Department of Computer Science University of Singapore Singapore 117417 [email protected] Abstract Syntactic knowledge is important for discourse relation recognition. Yet only heuristically selected flat paths and 2-level production rules have been used to incorporate such information so far. In this paper we propose using tree kernel based approach to automatically mine the syntactic information from the parse trees for discourse analysis, applying kernel function to the tree structures directly. These structural syntactic features, together with other normal flat features are incorporated into our composite kernel to capture diverse knowledge for simultaneous discourse identification and classification for both explicit and implicit relations. The experiment shows tree kernel approach is able to give statistical significant improvements over flat syntactic path feature. We also illustrate that tree kernel approach covers more structure information than the production rules, which allows tree kernel to further incorporate information from a higher dimension space for possible better discrimination. Besides, we further propose to leverage on temporal ordering information to constrain the interpretation of discourse relation, which also demonstrate statistical significant improvements for discourse relation recognition on PDTB 2.0 for both explicit and implicit as well. 1 Introduction Discourse relations capture the internal structure and logical relationship of coherent text, including Temporal, Causal and Contrastive relations etc. The ability of recognizing such relations between text units including identifying and classifying provides important information to other natural language processing systems, such as language generation, document summarization, and question answering. For example, Causal relation can be used to answer more sophisticated, non-factoid ‘Why’ questions. Lee et al. (2006) demonstrates that modeling discourse structure requires prior linguistic analysis on syntax. This shows the importance of syntactic knowledge to discourse analysis. However, most of previous work only deploys lexical and semantic features (Marcu and Echihabi, 2002; Pettibone and PonBarry, 2003; Saito et al., 2006; Ben and James, 2007; Lin et al., 2009; Pitler et al., 2009) with only two exceptions (Ben and James, 2007; Lin et al., 2009). Nevertheless, Ben and James (2007) only uses flat syntactic path connecting connective and arguments in the parse tree. The hierarchical structured information in the trees is not well preserved in their flat syntactic path features. Besides, such a syntactic feature selected and defined according to linguistic intuition has its limitation, as it remains unclear what kinds of syntactic heuristics are effective for discourse analysis. The more recent work from Lin et al. (2009) uses 2-level production rules to represent parse tree information. Yet it doesn’t cover all the other sub-trees structural information which can be also useful for the recognition. 
In this paper we propose using tree kernel based method to automatically mine the syntactic 710 information from the parse trees for discourse analysis, applying kernel function to the parse tree structures directly. These structural syntactic features, together with other flat features are then incorporated into our composite kernel to capture diverse knowledge for simultaneous discourse identification and classification. The experiment shows that tree kernel is able to effectively incorporate syntactic structural information and produce statistical significant improvements over flat syntactic path feature for the recognition of both explicit and implicit relation in Penn Discourse Treebank (PDTB; Prasad et al., 2008). We also illustrate that tree kernel approach covers more structure information than the production rules, which allows tree kernel to further work on a higher dimensional space for possible better discrimination. Besides, inspired by the linguistic study on tense and discourse anaphor (Webber, 1988), we further propose to incorporate temporal ordering information to constrain the interpretation of discourse relation, which also demonstrates statistical significant improvements for discourse relation recognition on PDTB v2.0 for both explicit and implicit relations. The organization of the rest of the paper is as follows. We briefly introduce PDTB in Section 2. Section 3 gives the related work on tree kernel approach in NLP and its difference with production rules, and also linguistic study on tense and discourse anaphor. Section 4 introduces the frame work for discourse recognition, as well as the baseline feature space and the SVM classifier. We present our kernel-based method in Section 5, and the usage of temporal ordering feature in Section 6. Section 7 shows the experiments and discussions. We conclude our works in Section 8. 2 Penn Discourse Tree Bank The Penn Discourse Treebank (PDTB) is the largest available annotated corpora of discourse relations (Prasad et al., 2008) over 2,312 Wall Street Journal articles. The PDTB models discourse relation in the predicate-argument view, where a discourse connective (e.g., but) is treated as a predicate taking two text spans as its arguments. The argument that the discourse connective syntactically bounds to is called Arg2, and the other argument is called Arg1. The PDTB provides annotations for both explicit and implicit discourse relations. An explicit relation is triggered by an explicit connective. Example (1) shows an explicit Contrast relation signaled by the discourse connective ‘but’. (1). Arg1. Yesterday, the retailing and financial services giant reported a 16% drop in third-quarter earnings to $257.5 million, or 75 cents a share, from a restated $305 million, or 80 cents a share, a year earlier. Arg2. But the news was even worse for Sears's core U.S. retailing operation, the largest in the nation. In the PDTB, local implicit relations are also annotated. The annotators insert a connective expression that best conveys the inferred implicit relation between adjacent sentences within the same paragraph. In Example (2), the annotators select ‘because’ as the most appropriate connective to express the inferred Causal relation between the sentences. There is one special label AltLex pre-defined for cases where the insertion of an Implicit connective to express an inferred relation led to a redundancy in the expression of the relation. 
In Example (3), the Causal relation derived between sentences is alternatively lexicalized by some non-connective expression shown in square brackets, so no implicit connective is inserted. In our experiments, we treat AltLex Relations the same way as normal Implicit relations. (2). Arg1. Some have raised their cash positions to record levels. Arg2. Implicit = Because High cash positions help buffer a fund when the market falls. (3). Arg1. Ms. Bartlett’s previous work, which earned her an international reputation in the non-horticultural art world, often took gardens as its nominal subject. Arg2. [Mayhap this metaphorical connection made] the BPC Fine Arts Committee think she had a literal green thumb. The PDTB also captures two non-implicit cases: (a) Entity relation where the relation between adjacent sentences is based on entity coherence (Knott et al., 2001) as in Example (4); and (b) No relation where no discourse or entity-based coherence relation can be inferred between adjacent sentences. 711 (4). But for South Garden, the grid was to be a 3-D network of masonry or hedge walls with real plants inside them. In a Letter to the BPCA, kelly/varnell called this “arbitrary and amateurish.” Each Explicit, Implicit and AltLex relation is annotated with a sense. The senses in PDTB are arranged in a three-level hierarchy. The top level has four tags representing four major semantic classes: Temporal, Contingency, Comparison and Expansion. For each class, a second level of types is defined to further refine the semantic of the class levels. For example, Contingency has two types Cause and Condition. A third level of subtype specifies the semantic contribution of each argument. In our experiments, we use only the top level of the sense annotations. 3 Related Work Tree Kernel based Approach in NLP. While the feature based approach may not be able to fully utilize the syntactic information in a parse tree, an alternative to the feature-based methods, tree kernel methods (Haussler, 1999) have been proposed to implicitly explore features in a high dimensional space by employing a kernel function to calculate the similarity between two objects directly. In particular, the kernel methods could be very effective at reducing the burden of feature engineering for structured objects in NLP research (Culotta and Sorensen, 2004). This is because a kernel can measure the similarity between two discrete structured objects by directly using the original representation of the objects instead of explicitly enumerating their features. Indeed, using kernel methods to mine structural knowledge has shown success in some NLP applications like parsing (Collins and Duffy, 2001; Moschitti, 2004) and relation extraction (Zelenko et al., 2003; Zhang et al., 2006). However, to our knowledge, the application of such a technique to discourse relation recognition still remains unexplored. Lin et al. (2009) has explored the 2-level production rules for discourse analysis. However, Figure 1 shows that only 2-level sub-tree structures (e.g. 𝑇𝑎- 𝑇𝑒) are covered in production rules. Other sub-trees beyond 2-level (e.g. 𝑇𝑓- 𝑇𝑗) are only captured in the tree kernel, which allows tree kernel to further leverage on information from higher dimension space for possible better discrimination. Especially, when there are enough training data, this is similar to the study on language modeling that N-gram beyond unigram and bigram further improves the performance in large corpus. Tense and Temporal Ordering Information. 
Linguistic studies (Webber, 1988) show that a tensed clause 𝐶𝑏 provides two pieces of semantic information: (a) a description of an event (or situation) 𝐸𝑏; and (b) a particular configuration of the point of event (𝐸𝑇), the point of reference (𝑅𝑇) and the point of speech (𝑆𝑇). Both the characteristics of 𝐸𝑏 and the configuration of 𝐸𝑇, 𝑅𝑇 and 𝑆𝑇 are critical to interpret the relationship of event 𝐸𝑏 with other events in the discourse model. Our observation on temporal ordering information is in line with the above, which is also incorporated in our discourse analyzer. 4 The Recognition Framework In the learning framework, a training or testing instance is formed by a non-overlapping clause(s)/sentence(s) pair. Specifically, since implicit relations in PDTB are defined to be local, only clauses from adjacent sentences are paired for implicit cases. During training, for each discourse relation encountered, a positive instance is created by pairing the two arguments. Also a Figure 1. Different sub-tree sets for 𝑇1 used by 2-level production rules and convolution tree kernel approaches. 𝑇𝑎-𝑇𝑗 and 𝑇1 itself are covered by tree kernel, while only 𝑇𝑎-𝑇𝑒 are covered by production rules. Decomposition C E G F H A B D (𝑇1) A B C (𝑇𝑎) D F E (𝑇𝑏) C D (𝑇𝑐) E G (𝑇𝑑) F H (𝑇𝑒) D E G F H (𝑇𝑓) (𝑇𝑔) A C D B D E G F H C (𝑇𝑗) C (𝑇𝑕) D F E (𝑇𝑖) A C D B F E 712 set of negative instances is formed by paring each argument with neighboring non-argument clauses or sentences. Based on the training instances, a binary classifier is generated for each type using a particular learning algorithm. During resolution, (a) clauses within same sentence and sentences within three-sentence spans are paired to form an explicit testing instance; and (b) neighboring sentences within three-sentence spans are paired to form an implicit testing instance. The instance is presented to each explicit or implicit relation classifier which then returns a class label with a confidence value indicating the likelihood that the candidate pair holds a particular discourse relation. The relation with the highest confidence value will be assigned to the pair. 4.1 Base Features In our system, the base features adopted include lexical pair, distance and attribution etc. as listed in Table 1. All these base features have been proved effective for discourse analysis in previous work. 4.2 Support Vector Machine In theory, any discriminative learning algorithm is applicable to learn the classifier for discourse analysis. In our study, we use Support Vector Machine (Vapnik, 1995) to allow the use of kernels to incorporate the structure feature. Suppose the training set 𝑆 consists of labeled vectors { 𝑥𝑖, 𝑦𝑖 }, where 𝑥𝑖 is the feature vector of a training instance and 𝑦𝑖 is its class label. The classifier learned by SVM is: 𝑓 𝑥 = 𝑠𝑔𝑛 𝑦𝑖𝑎𝑖𝑥∗𝑥𝑖+ 𝑏 𝑖=1 where 𝑎𝑖 is the learned parameter for a feature vector 𝑥𝑖, and 𝑏 is another parameter which can be derived from 𝑎𝑖 . A testing instance 𝑥 is classified as positive if 𝑓 𝑥 > 01. One advantage of SVM is that we can use tree kernel approach to capture syntactic parse tree information in a particular high-dimension space. In the next section, we will discuss how to use kernel to incorporate the more complex structure feature. 5 Incorporating Structural Syntactic Information A parse tree that covers both discourse arguments could provide us much syntactic information related to the pair. 
Both the syntactic flat path connecting connective and arguments and the 2-level production rules in the parse tree used in previous study can be directly described by the tree structure. Other syntactic knowledge that may be helpful for discourse resolution could also be implicitly represented in the tree. Therefore, by comparing the common sub-structures between two trees we can find out to which level two trees contain similar syntactic information, which can be done using a convolution tree kernel. The value returned from the tree kernel reflects the similarity between two instances in syntax. Such syntactic similarity can be further combined with other flat linguistic features to compute the overall similarity between two instances through a composite kernel. And thus an SVM classifier can be learned and then used for recognition. 5.1 Structural Syntactic Feature Parsing is a sentence level processing. However, in many cases two discourse arguments do not occur in the same sentence. To present their syntactic properties and relations in a single tree structure, we construct a syntax tree for each paragraph by attaching the parsing trees of all its sentences to an upper paragraph node. In this paper, we only consider discourse relations within 3 sentences, which only occur within each pa 1 In our task, the result of 𝑓 𝑥 is used as the confidence value of the candidate argument pair 𝑥 to hold a particular discourse relation. Feature Names Description (F1) cue phrase (F2) neighboring punctuation (F3) position of connective if presents (F4) extents of arguments (F5) relative order of arguments (F6) distance between arguments (F7) grammatical role of arguments (F8) lexical pairs (F9) attribution Table 1. Base Feature Set 713 ragraph, thus paragraph parse trees are sufficient. Our 3-sentence spans cover 95% discourse relation cases in PDTB v2.0. Having obtained the parse tree of a paragraph, we shall consider how to select the appropriate portion of the tree as the structured feature for a given instance. As each instance is related to two arguments, the structured feature at least should be able to cover both of these two arguments. Generally, the more substructure of the tree is included, the more syntactic information would be provided, but at the same time the more noisy information would likely be introduced. In our study, we examine three structured features that contain different substructures of the paragraph parse tree: Min-Expansion This feature records the minimal structure covering both arguments and connective word in the parse tree. It only includes the nodes occurring in the shortest path connecting Arg1, Arg2 and connective, via the nearest commonly commanding node. For example, considering Example (5), Figure 2 illustrates the representation of the structured feature for this relation instance. Note that the two clauses underlined with dashed lines are attributions which are not part of the relation. (5). Arg1. Suppression of the book, Judge Oakes observed, would operate as a prior restraint and thus involve the First Amendment. Arg2. Moreover, and here Judge Oakes went to the heart of the question, “Responsible biographers and historians constantly use primary sources, letters, diaries and memoranda.” Simple-Expansion Min-Expansion could, to some degree, describe the syntactic relationships between the connective and arguments. 
However, the syntactic properties of the argument pair might not be captured, because the tree structure surrounding the argument is not taken into consideration. To incorporate such information, Simple-Expansion not only contains all the nodes in Min-Expansion, but also includes the first-level children of these nodes2. Figure 3 illustrates such a feature for Example (5). We can see that the nodes “PRN” in both sentences are included in the feature. Full-Expansion This feature focuses on the tree structure between two arguments. It not only includes all the nodes in SimpleExpansion, but also the nodes (beneath the nearest commanding parent) that cover the words between the two arguments. Such a feature keeps the most information related to the argument pair. Figure 4 2 We will not expand the nodes denoting the sentences other than where the arguments occur. Figure 2. Min-Expansion tree built from golden standard parse tree for the explicit discourse relation in Example (5). Note that to distinguish from other words, we explicitly mark up in the structured feature the arguments and connective, by appending a string tag “Arg1”, “Arg2” and “Connective” respectively. Figure 3. Simple-Expansion tree for the explicit discourse relation in Example (5). 714 shows the structure for feature FullExpansion of Example (5). As illustrated, different from in Simple-Expansion, each sub-tree of “PRN” in each sentence is fully expanded and all its children nodes are included in Full-Expansion. 5.2 Convolution Parse Tree Kernel Given the parse tree defined above, we use the same convolution tree kernel as described in (Collins and Duffy, 2002) and (Moschitti, 2004). In general, we can represent a parse tree 𝑇 by a vector of integer counts of each sub-tree type (regardless of its ancestors): ∅ 𝑇 = (#𝑜𝑓 𝑠𝑢𝑏𝑡𝑟𝑒𝑒𝑠 𝑜𝑓 𝑡𝑦𝑝𝑒 1, … , # 𝑜𝑓 𝑠𝑢𝑏𝑡𝑟𝑒𝑒𝑠 𝑜𝑓𝑡𝑦𝑝𝑒 𝐼, … , # 𝑜𝑓 𝑠𝑢𝑏𝑡𝑟𝑒𝑒𝑠 𝑜𝑓 𝑡𝑦𝑝𝑒 𝑛). This results in a very high dimensionality since the number of different sub-trees is exponential in its size. Thus, it is computational infeasible to directly use the feature vector ∅(𝑇). To solve the computational issue, a tree kernel function is introduced to calculate the dot product between the above high dimensional vectors efficiently. Given two tree segments 𝑇1 and 𝑇2, the tree kernel function is defined: 𝐾 𝑇1, 𝑇2 = < ∅ 𝑇1 , ∅ 𝑇2 > = ∅ 𝑇1 𝑖 , ∅ 𝑇2 [𝑖] 𝑖 = 𝐼𝑖 𝑛1 ∗𝐼𝑖(𝑛2) 𝑖 𝑛2∈𝑁2 𝑛1∈𝑁1 where 𝑁1and 𝑁2 are the sets of all nodes in trees 𝑇1and 𝑇2, respectively; and 𝐼𝑖(𝑛) is the indicator function that is 1 iff a subtree of type 𝑖 occurs with root at node 𝑛 or zero otherwise. (Collins and Duffy, 2002) shows that 𝐾(𝑇1, 𝑇2) is an instance of convolution kernels over tree structures, and can be computed in 𝑂( 𝑁1 , 𝑁2 ) by the following recursive definitions: ∆ 𝑛1, 𝑛2 = 𝐼𝑖 𝑛1 ∗𝐼𝑖(𝑛2) 𝑖 (1) ∆ 𝑛1, 𝑛2 = 0 if 𝑛1 and 𝑛2 do not have the same syntactic tag or their children are different; (2) else if both 𝑛1 and 𝑛2 are pre-terminals (i.e. POS tags), ∆ 𝑛1, 𝑛2 = 1 × 𝜆; (3) else, ∆ 𝑛1, 𝑛2 = 𝜆 (1 + ∆(𝑐𝑕( 𝑛𝑐(𝑛1) 𝑗=1 𝑛1, 𝑗), 𝑐𝑕(𝑛2, 𝑗))), where 𝑛𝑐(𝑛1) is the number of the children of 𝑛1 , 𝑐𝑕(𝑛, 𝑗) is the 𝑗𝑡𝑕 child of node 𝑛 and 𝜆 (0 < 𝜆< 1) is the decay factor in order to make the kernel value less variable with respect to the sub-tree sizes. In addition, the recursive rule (3) holds because given two nodes with the same children, one can construct common sub-trees using these children and common sub-trees of further offspring. The parse tree kernel counts the number of common sub-trees as the syntactic similarity measure between two instances. 
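To make the recursive definition above concrete, the following minimal Python sketch (an illustration added here, not the authors' implementation) computes the convolution tree kernel over trees encoded as nested tuples such as ('NP', ('DT', 'a'), ('NN', 'gross')). The tuple encoding and the default decay factor are assumptions of the sketch, and it omits the memoization needed for the efficient computation noted below.

```python
def label(t):
    # Syntactic tag at the root of a subtree.
    return t[0]

def children(t):
    # Child subtrees; a bare string (a word) has no children.
    return t[1:] if isinstance(t, tuple) else ()

def production(t):
    # The production a node expands with, e.g. ('NP', ('DT', 'NN')).
    return (label(t), tuple(label(c) if isinstance(c, tuple) else c
                            for c in children(t)))

def is_preterminal(t):
    kids = children(t)
    return len(kids) > 0 and all(not isinstance(c, tuple) for c in kids)

def delta(n1, n2, lam):
    # Rule (1): zero unless both nodes expand with the same production.
    if production(n1) != production(n2):
        return 0.0
    # Rule (2): matching pre-terminals (same POS tag over the same word) score lambda.
    if is_preterminal(n1):
        return lam
    # Rule (3): recurse over the aligned children.
    score = lam
    for c1, c2 in zip(children(n1), children(n2)):
        score *= 1.0 + delta(c1, c2, lam)
    return score

def nodes(t):
    # All internal (non-word) nodes of the tree.
    yield t
    for c in children(t):
        if isinstance(c, tuple):
            yield from nodes(c)

def tree_kernel(t1, t2, lam=0.4):
    # K(T1, T2): decayed count of common sub-trees, summed over all node pairs.
    return sum(delta(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))
```

For example, tree_kernel(('NP', ('DT', 'a'), ('NN', 'gross')), ('NP', ('DT', 'the'), ('NN', 'gross'))) credits both the shared NN pre-terminal and the shared NP -> DT NN skeleton, each discounted by the decay factor.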
The time complexity for computing this kernel is 𝑂( 𝑁1 ∙ 𝑁2 ). 5.3 Composite Tree Kernel Besides the above convolution parse tree kernel 𝐾 𝑡𝑟𝑒𝑒 𝑥1, 𝑥2 = 𝐾(𝑇1, 𝑇2) defined to capture the syntactic information between two instances 𝑥1 and 𝑥2, we also use another kernel 𝐾 𝑓𝑙𝑎𝑡 to capture other flat features, such as base features (described in Table 1) and temporal ordering information (described in Section 6). In our study, the composite kernel is defined in the following way: 𝐾 1 𝑥1, 𝑥2 = 𝛼∙𝐾 𝑓𝑙𝑎𝑡 𝑥1, 𝑥2 + 1 −𝛼 ∙𝐾 𝑡𝑟𝑒𝑒 𝑥1, 𝑥2 . Here, 𝐾 (∙,∙) can be normalized by 𝐾 𝑦, 𝑧 = 𝐾 𝑦, 𝑧 𝐾 𝑦, 𝑦 ∙𝐾 𝑧, 𝑧 and 𝛼is the coefficient. 6 Using Temporal Ordering Information In our discourse analyzer, we also add in temporal information to be used as features to predict discourse relations. This is because both our observations and some linguistic studies (Webber, 1988) show that temporal ordering information including tense, aspectual and event orders between two arguments may constrain the discourse relation type. For example, the connective Figure 4. Full-Expansion tree for the explicit discourse relation in Example (5). 715 word is the same in both Example (6) and (7), but the tense shift from progressive form in clause 6.a to simple past form in clause 6.b, indicating that the twisting occurred during the state of running the marathon, usually signals a temporal discourse relation; while in Example (7), both clauses are in past tense and it is marked as a Causal relation. (6). a. Yesterday Holly was running a marathon b. when she twisted her ankle. (7). a. Use of dispersants was approved b. when a test on the third day showed some positive results. Inspired by the linguistic model from Webber (1988) as described in Section 3, we explore the temporal order of events in two adjacent sentences for discourse relation interpretation. Here event is represented by the head of verb, and the temporal order refers to the logical occurrence (i.e. before/at/after) between events. For instance, the event ordering in Example (8) can be interpreted as: 𝐸𝑣𝑒𝑛𝑡 𝑏𝑟𝑜𝑘𝑒𝑛 ≺𝑏𝑒𝑓𝑜𝑟𝑒𝐸𝑣𝑒𝑛𝑡(𝑤𝑒𝑛𝑡) . 8. a. John went to the hospital. b. He had broken his ankle on a patch of ice. We notice that the feasible temporal order of events differs for different discourse relations. For example, in causal relations, cause event usually happens before effect event, i.e. 𝐸𝑣𝑒𝑛𝑡 𝑐𝑎𝑢𝑠𝑒 ≺𝑏𝑒𝑓𝑜𝑟𝑒𝐸𝑣𝑒𝑛𝑡(𝑒𝑓𝑓𝑒𝑐𝑡). So it is possible to infer a causal relation in Example (8) if and only if 8.b is taken to be the cause event and 8.a is taken to be the effect event. That is, 8.b is taken as happening prior to his going into hospital. In our experiments, we use the TARSQI3 system to identify event, analyze tense and aspectual information, and label the temporal order of events. Then the tense and temporal ordering information is extracted as features for discourse relation recognition. 3 http://www.isi.edu/tarsqi/ 7 Experiments and Results In this section we provide the results of a set of experiments focused on the task of simultaneous discourse identification and classification. 7.1 Experimental Settings We experiment on PDTB v2.0 corpus. Besides four top-level discourse relations, we also consider Entity and No relations described in Section 2. We directly use the golden standard parse trees in Penn TreeBank. We employ an SVM coreference resolver trained and tested on ACE 2005 with 79.5% Precision, 66.7% Recall and 72.5% F1 to label coreference mentions of the same named entity in an article. 
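As an aside before the learning setup, the composite kernel of Section 5.3 used in these experiments can be sketched as follows (again an added illustration rather than the authors' code): each component kernel is cosine-normalized as in the formula above and the two are interpolated with the coefficient α. The default value of α here is an assumption of the sketch; the paper introduces α only as a coefficient.

```python
import math

def normalized(kernel, x, y):
    # K_hat(x, y) = K(x, y) / sqrt(K(x, x) * K(y, y))
    denom = math.sqrt(kernel(x, x) * kernel(y, y))
    return kernel(x, y) / denom if denom > 0 else 0.0

def composite_kernel(x1, x2, k_flat, k_tree, alpha=0.5):
    # alpha * K_hat_flat + (1 - alpha) * K_hat_tree, where k_flat scores the
    # flat features (base and temporal ordering features) of the two instances
    # and k_tree is the convolution tree kernel over their structured features.
    return (alpha * normalized(k_flat, x1, x2)
            + (1.0 - alpha) * normalized(k_tree, x1, x2))
```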
For learning, we use the binary SVMLight developed by (Joachims, 1998) and Tree Kernel Toolkits developed by (Moschitti, 2004). All classifiers are trained with default learning parameters. The performance is evaluated using Accuracy which is calculated as follow: 𝐴𝑐𝑐𝑢𝑟𝑎𝑐𝑦= 𝑇𝑟𝑢𝑒𝑃𝑜𝑠𝑖𝑡𝑖𝑣𝑒+ 𝑇𝑟𝑢𝑒𝑁𝑒𝑔𝑎𝑡𝑖𝑣𝑒 𝐴𝑙𝑙 Sections 2-22 are used for training and Sections 23-24 for testing. In this paper, we only consider any non-overlapping clauses/sentences pair in 3-sentence spans. For training, there were 14812, 12843 and 4410 instances for Explicit, Implicit and Entity+No relations respectively; while for testing, the number was 1489, 1167 and 380. 7.2 System with Structural Kernel Table 2 lists the performance of simultaneous identification and classification on level-1 discourse senses. In the first row, only base features described in Section 4 are used. In the second row, we test Ben and James (2007)’s algorithm which uses heuristically defined syntactic paths and acts as a good baseline to compare with our learned-based approach using the structured information. The last three rows of Table 2 reports the results combining base features with three syntactic structured features (i.e. Min-Expansion, Simple-Expansion and Full-Expansion) described in Section 5. We can see that all our tree kernels outperform the manually constructed flat path feature in all three groups including Explicit only, Implicit only and All relations, with the accuracy increasing by 1.8%, 6.7% and 3.1% respectively. Especially, it shows that structural syntactic information is more helpful for Implicit cases which is generally much harder than Explicit cases. We 716 conduct chi square statistical significance test on All relations between flat path approach and Simple-Expansion approach, which shows the performance improvements are statistical significant (𝜌< 0.05) through incorporating tree kernel. This proves that structural syntactic information has good predication power for discourse analysis in both explicit and implicit relations. We also observe that among the three syntactic structured features, Min-Expansion and SimpleExpansion achieve similar performances which are better than the result for Full-Expansion. This may be due to that most significant information is with the arguments and the shortest path connecting connectives and arguments. However, Full-Expansion that includes more information in other branches may introduce too many details which are rather tangential to discourse recognition. Our subsequent reports will focus on Simple-Expansion, unless otherwise specified. As described in Section 5, to compute the structural information, parse trees for different sentences are connected to form a large tree for a paragraph. It would be interesting to find how the structured information works for discourse relations whose arguments reside in different sentences. For this purpose, we test the accuracy for discourse relations with the two arguments occurring in the same sentence, one-sentence apart, and two-sentence apart. Table 3 compares the learning systems with/without the structured feature present. From the table, for all three cases, the accuracies drop with the increase of the distances between the two arguments. However, adding the structured information would bring consistent improvement against the baselines regardless of the number of sentence distance. This observation suggests that the structured syntactic information is more helpful for intersentential discourse analysis. 
We also concern about how the structured information works for identification and classification respectively. Table 4 lists the results for the two sub-tasks. As shown, with the structured information incorporated, the system (Base + Tree Kernel) can boost the performance of the two baselines (Base Features in the first row andBase + Manually selected paths in the second row), for both identification and classification respectively. We also observe that the structural syntactic information is more helpful for classification task which is generally harder than identification. This is in line with the intuition that classification is generally a much harder task. We find that due to the weak modeling of Entity relations, many Entity relations which are non-discourse relation instances are mis-identified as implicit Expansion relations. Nevertheless, it clearly directs our future work. 7.3 System with Temporal Ordering Information To examine the effectiveness of our temporal ordering information, we perform experiments Features Accuracy Explicit Implicit All Base Features 67.1 29 48.6 Base + Manually selected flat path features 70.3 32 52.6 Base + Tree kernel (Min-Expansion) 71.9 38.6 55.6 Base + Tree kernel (Simple-Expansion) 72.1 38.7 55.7 Base + Tree kernel (Full-Expansion) 71.8 38.4 55.4 Sentence Distance 0 (959) 1 (1746) 2 (331) Base Features 52 49.2 35.5 Base + Manually selected flat path features 56.7 52 43.8 Base + Tree Kernel 58.3 55.6 49.7 Tasks Identification Classification Base Features 58.6 50.5 Base + Manually selected flat path features 59.7 52.6 Base + Tree Kernel 63.3 59.3 Table 3. Results of the syntactic structured kernel for discourse relations recognition with arguments in different sentences apart. Table 4. Results of the syntactic structured kernel for simultaneous discourse identification and classification subtasks. Table 2. Results of the syntactic structured kernels on level-1 discourse relation recognition. 717 on simultaneous identification and classification of level-1 discourse relations to compare with using only base feature set as baseline. The results are shown in Table 5. We observe that the use of temporal ordering information increases the accuracy by 3%, 3.6% and 3.2% for Explicit, Implicit and All groups respectively. We conduct chi square statistical significant test on All relations, which shows the performance improvement is statistical significant (𝜌< 0.05). It indicates that temporal ordering information can constrain the discourse relation types inferred within a clause(s)/sentence(s) pair for both explicit and implicit relations. We observe that although temporal ordering information is useful in both explicit and implicit relation recognition, the contributions of the specific information are quite different for the two cases. In our experiments, we use tense and aspectual information for explicit relations, while event ordering information is used for implicit relations. The reason is explicit connective itself provides a strong hint for explicit relation, so tense and aspectual analysis which yields a reliable result can provide additional constraints, thus can help explicit relation recognition. However, event ordering which would inevitably involve more noises will adversely affect the explicit relation recognition performance. On the other hand, for implicit relations with no explicit connective words, tense and aspectual information alone is not enough for discourse analysis. 
Event ordering can provide more necessary information to further constrain the inferred relations. 7.4 Overall Results We also evaluate our model which combines base features, tree kernel and tense/temporal ordering information together on Explicit, Implicit and All Relations respectively. The overall results are shown in Table 6. 8 Conclusions and Future Works The purpose of this paper is to explore how to make use of the structural syntactic knowledge to do discourse relation recognition. In previous work, syntactic information from parse trees is represented as a set of heuristically selected flat paths or 2-level production rules. However, the features defined this way may not necessarily capture all useful syntactic information provided by the parse trees for discourse analysis. In the paper, we propose a kernel-based method to incorporate the structural information embedded in parse trees. Specifically, we directly utilize the syntactic parse tree as a structure feature, and then apply kernels to such a feature, together with other normal features. The experimental results on PDTB v2.0 show that our kernel-based approach is able to give statistical significant improvement over flat syntactic path method. In addition, we also propose to incorporate temporal ordering information to constrain the interpretation of discourse relations, which also demonstrate statistical significant improvements for discourse relation recognition, both explicit and implicit. In future, we plan to model Entity relations which constitute 24% of Implicit+Entity+No relation cases, thus to improve the accuracy of relation detection. Reference Ben W. and James P. 2007. Automatically Identifying the Arguments of Discourse Connectives. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 92-101. Culotta A. and Sorensen J. 2004. Dependency Tree Kernel for Relation Extraction. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2004), pages 423429. Collins M. and Duffy N. 2001. New Ranking Algorithms for Parsing and Tagging: Kernels over DisFeatures Accuracy Explicit Implicit All Base Features 67.1 29 48.6 Base + Temporal Ordering Information 70.1 32.6 51.8 Relations Accuracy Explicit 74.2 Implicit 40.0 All 57.3 Table 5. Results of tense and temporal order information on level-1 discourse relations. Table 6. Overall results for combined model (Base + Tree Kernel + Tense/Temporal). 718 crete Structures and the Voted Perceptron. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), pages 263-270. Collins M. and Duffy N. 2002. Convolution Kernels for Natural Language. NIPS-2001. Haussler D. 1999. Convolution Kernels on Discrete Structures. Technical Report UCS-CRL-99-10, University of California, Santa Cruz. Joachims T. 1999. Making Large-scale SVM Learning Practical. In Advances in Kernel Methods – Support Vector Learning. MIT Press. Knott, A., Oberlander, J., O’Donnel, M., and Mellish, C. 2001. Beyond elaboration: the interaction of relations and focus in coherent text. In T. Sanders, J. Schilperoord, and W. Spooren, editors, Text Representation: Linguistic and Psycholinguistics Aspects, pages 181-196. Benjamins, Amsterdam. Lee A., Prasad R., Joshi A., Dinesh N. and Webber B. 2006. Complexity of dependencies in discourse: are dependencies in discourse more complex than in syntax? 
In Proceedings of the 5th International Workshop on Treebanks and Linguistic Theories. Prague, Czech Republic, December. Lin Z., Kan M. and Ng H. 2009. Recognizing Implicit Discourse Relations in the Penn Discourse Treebank. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP 2009), Singapore, August. Marcu D. and Echihabi A. 2002. An Unsupervised Approach to Recognizing Discourse Relations. In Proceedings of the 40th Annual Meeting of ACL, pages 368-375. Moschitti A. 2004. A Study on Convolution Kernels for Shallow Semantic Parsing. In Proceedings of the 42th Annual Meeting of the Association for Computational Linguistics (ACL 2004), pages 335342. Pettibone J. and Pon-Barry H. 2003. A Maximum Entropy Approach to Recognizing Discourse Relations in Spoken Language. Working Paper. The Stanford Natural Language Processing Group, June 6. Pitler E., Louis A. and Nenkova A. 2009. Automatic Sense Predication for Implicit Discourse Relations in Text. In Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2009). Prasad R., Dinesh N., Lee A., Miltsakaki E., Robaldo L., Joshi A. and Webber B. 2008. The Penn Discourse TreeBank 2.0. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008). Saito M., Yamamoto K. and Sekine S. 2006. Using phrasal patterns to identify discourse relations. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLTNAACL 2006), pages 133–136, New York, USA. Vapnik V. 1995. The Nature of Statistical Learning Theory. Springer-Verlag, New York. Webber Bonnie. 1988. Tense as Discourse Anaphor. Computational Linguistics, 14:61–73. Zelenko D., Aone C. and Richardella A. 2003. Kernel Methods for Relation Extraction. Journal of Machine Learning Research, 3(6):1083-1106. Zhang M., Zhang J. and Su J. Exploring Syntactic Features for Relation Extraction using a Convolution Tree Kernel. In Proceedings of the Human Language Technology conference - North American chapter of the Association for Computational Linguistics annual meeting (HLT-NAACL 2006), New York, USA. 719
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 720–728, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Hierarchical Joint Learning: Improving Joint Parsing and Named Entity Recognition with Non-Jointly Labeled Data Jenny Rose Finkel and Christopher D. Manning Computer Science Department Stanford University Stanford, CA 94305 {jrfinkel|manning}@cs.stanford.edu Abstract One of the main obstacles to producing high quality joint models is the lack of jointly annotated data. Joint modeling of multiple natural language processing tasks outperforms single-task models learned from the same data, but still underperforms compared to single-task models learned on the more abundant quantities of available single-task annotated data. In this paper we present a novel model which makes use of additional single-task annotated data to improve the performance of a joint model. Our model utilizes a hierarchical prior to link the feature weights for shared features in several single-task models and the joint model. Experiments on joint parsing and named entity recognition, using the OntoNotes corpus, show that our hierarchical joint model can produce substantial gains over a joint model trained on only the jointly annotated data. 1 Introduction Joint learning of multiple types of linguistic structure results in models which produce more consistent outputs, and for which performance improves across all aspects of the joint structure. Joint models can be particularly useful for producing analyses of sentences which are used as input for higher-level, more semantically-oriented systems, such as question answering and machine translation. These high-level systems typically combine the outputs from many low-level systems, such as parsing, named entity recognition (NER) and coreference resolution. When trained separately, these single-task models can produce outputs which are inconsistent with one another, such as named entities which do not correspond to any nodes in the parse tree (see Figure 1 for an example). Moreover, one expects that the different types of annotations should provide useful information to one another, and that modeling them jointly should improve performance. Because a named entity should correspond to a node in the parse tree, strong evidence about either aspect of the model should positively impact the other aspect. However, designing joint models which actually improve performance has proven challenging. The CoNLL 2008 shared task (Surdeanu et al., 2008) was on joint parsing and semantic role labeling, but the best systems (Johansson and Nugues, 2008) were the ones which completely decoupled the tasks. While negative results are rarely published, this was not the first failed attempt at joint parsing and semantic role labeling (Sutton and McCallum, 2005). There have been some recent successes with joint modeling. Zhang and Clark (2008) built a perceptron-based joint segmenter and part-of-speech (POS) tagger for Chinese, and Toutanova and Cherry (2009) learned a joint model of lemmatization and POS tagging which outperformed a pipelined model. Adler and Elhadad (2006) presented an HMMbased approach for unsupervised joint morphological segmentation and tagging of Hebrew, and Goldberg and Tsarfaty (2008) developed a joint model of segmentation, tagging and parsing of Hebrew, based on lattice parsing. 
No discussion of joint modeling would be complete without mention of (Miller et al., 2000), who trained a Collinsstyle generative parser (Collins, 1997) over a syntactic structure augmented with the template entity and template relations annotations for the MUC-7 shared task. One significant limitation for many joint models is the lack of jointly annotated data. We built a joint model of parsing and named entity recognition (Finkel and Manning, 2009b), which had small gains on parse performance and moderate gains on named entity performance, when compared with single-task models trained on the same data. However, the performance of our model, trained using the OntoNotes corpus (Hovy et al., 2006), fell short of separate parsing and named 720 FRAG INTJ UH Like NP NP DT a NN gross PP IN of NP QP DT a CD [billion NNS dollars]MONEY NP JJ last NN year Figure 1: Example from the data where separate parse and named entity models give conflicting output. entity models trained on larger corpora, annotated with only one type of information. This paper addresses the problem of how to learn high-quality joint models with smaller quantities of jointly-annotated data that has been augmented with larger amounts of single-task annotated data. To our knowledge this work is the first attempt at such a task. We use a hierarchical prior to link a joint model trained on jointly-annotated data with other single-task models trained on single-task annotated data. The key to making this work is for the joint model to share some features with each of the single-task models. Then, the singly-annotated data can be used to influence the feature weights for the shared features in the joint model. This is an important contribution, because it provides all the benefits of joint modeling, but without the high cost of jointly annotating large corpora. We applied our hierarchical joint model to parsing and named entity recognition, and it reduced errors by over 20% on both tasks when compared to a joint model trained on only the jointly annotated data. 2 Related Work Our task can be viewed as an instance of multi-task learning, a machine learning paradigm in which the objective is to simultaneously solve multiple, related tasks for which you have separate labeled training data. Many schemes for multitask learning, including the one we use here, are instances of hierarchical models. There has not been much work on multi-task learning in the NLP community; Daum´e III (2007) and Finkel and Manning (2009a) both build models for multi-domain learning, a variant on domain adaptation where there exists labeled training data for all domains and the goal is to improve performance on all of them. Ando and Zhang (2005) utilized a multitask learner within their semi-supervised algorithm to learn feature representations which were useful across a large number of related tasks. Outside of the NLP community, Elidan et al. (2008) used an undirected Bayesian transfer hierarchy to jointly model the shapes of multiple mammal species. Evgeniou et al. (2005) applied a hierarchical prior to modeling exam scores of students. Other instances of multi-task learning include (Baxter, 1997; Caruana, 1997; Yu et al., 2005; Xue et al., 2007). For a more general discussion of hierarchical models, we direct the reader to Chapter 5 of (Gelman et al., 2003) and Chapter 12 of (Gelman and Hill, 2006). 
3 Hierarchical Joint Learning In this section we will discuss the main contribution of this paper, our hierarchical joint model which improves joint modeling performance through the use of single-task models which can be trained on singly-annotated data. Our experiments are on a joint parsing and named entity task, but the technique is more general and only requires that the base models (the joint model and single-task models) share some features. This section covers the general technique, and we will cover the details of the parsing, named entity, and joint models that we use in Section 4. 3.1 Intuitive Overview As discussed, we have a joint model which requires jointly-annotated data, and several singletask models which only require singly-annotated data. The key to our hierarchical model is that the joint model must have features in common with each of the single models, though it can also have features which are only present in the joint model. 721 PARSE JOINT NER µ θ∗ σ∗ θp σp Dp θj σj Dj θn σn Dn Figure 2: A graphical representation of our hierarchical joint model. There are separate base models for just parsing, just NER, and joint parsing and NER. The parameters for these models are linked via a hierarchical prior. Each model has its own set of parameters (feature weights). However, parameters for the features which are shared between the single-task models and the joint model are able to influence one another via a hierarchical prior. This prior encourages the learned weights for the different models to be similar to one another. After training has been completed, we retain only the joint model’s parameters. Our resulting joint model is of higher quality than a comparable joint model trained on only the jointly-annotated data, due to all of the evidence provided by the additional single-task data. 3.2 Formal Model We have a set M of three base models: a parse-only model, an NER-only model and a joint model. These have corresponding loglikelihood functions Lp(Dp; θp), Ln(Dn; θn), and Lj(Dj; θj), where the Ds are the training data for each model, and the θs are the model-specific parameter (feature weight) vectors. These likelihood functions do not include priors over the θs. For representational simplicity, we assume that each of these vectors is the same size and corresponds to the same ordering of features. Features which don’t apply to a particular model type (e.g., parse features in the named entity model) will always be zero, so their weights have no impact on that model’s likelihood function. Conversely, allowing the presence of those features in models for which they do not apply will not influence their weights in the other models because there will be no evidence about them in the data. These three models are linked by a hierarchical prior, and their feature weight vectors are all drawn from this prior. The parameters θ∗for this prior have the same dimensionality as the model-specific parameters θm and are drawn from another, top-level prior. In our case, this top-level prior is a zero-mean Gaussian.1 The graphical representation of our hierarchical model is shown in Figure 2. The log-likelihood of this model is Lhier-joint(D; θ) = (1) X m∈M Lm(Dm; θm) − X i (θm,i −θ∗,i)2 2σ2m ! 
− X i (θ∗,i −µi)2 2σ2∗ The first summation in this equation computes the log-likelihood of each model, using the data and parameters which correspond to that model, and the prior likelihood of that model’s parameters, based on a Gaussian prior centered around the top-level, non-model-specific parameters θ∗, and with model-specific variance σm. The final summation in the equation computes the prior likelihood of the top-level parameters θ∗according to a Gaussian prior with variance σ∗and mean µ (typically zero). This formulation encourages each base model to have feature weights similar to the top-level parameters (and hence one another). The effects of the variances σm and σ∗warrant some discussion. σ∗has the familiar interpretation of dictating how much the model “cares” about feature weights diverging from zero (or µ). The model-specific variances, σm, have an entirely different interpretation. They dictate how how strong the penalty is for the domain-specific parameters to diverge from one another (via their similarity to θ∗). When σm are very low, then they are encouraged to be very similar, and taken to the extreme this is equivalent to completely tying the parameters between the tasks. When σm are very high, then there is less encouragement for the parameters to be similar, and taken to the extreme this is equivalent to completely decoupling the tasks. We need to compute partial derivatives in order to optimize the model parameters. The partial derivatives for the parameters for each base model m are given by: ∂Lhier(D; θ) ∂θm,i = ∂Lm(Dm, θm) ∂θm,i −θm,i −θ∗,i σ2 d (2) where the first term is the partial derivative according to the base model, and the second term is 1Though we use a zero-mean Gaussian prior, this toplevel prior could take many forms, including an L1 prior, or another hierarchical prior. 722 the prior centered around the top-level parameters. The partial derivatives for the top level parameters θ∗are: ∂Lhier(D; θ) ∂θ∗,i = X m∈M θ∗,i −θm,i σ2m ! −θ∗,i −µi σ2∗ (3) where the first term relates to how far each modelspecific weight vector is from the top-level parameter values, and the second term relates how far each top-level parameter is from zero. When a model has strong evidence for a feature, effectively what happens is that it pulls the value of the top-level parameter for that feature closer to the model-specific value for it. When it has little or no evidence for a feature then it will be pulled in the direction of the top-level parameter for that feature, whose value was influenced by the models which have evidence for that feature. 3.3 Optimization with Stochastic Gradient Descent Inference in joint models tends to be slow, and often requires the use of stochastic optimization in order for the optimization to be tractable. L-BFGS and gradient descent, two frequently used numerical optimization algorithms, require computing the value and partial derivatives of the objective function using the entire training set. Instead, we use stochastic gradient descent. It requires a stochastic objective function, which is meant to be a low computational cost estimate of the real objective function. In most NLP models, such as logistic regression with a Gaussian prior, computing the stochastic objective function is fairly straightforward: you compute the model likelihood and partial derivatives for a randomly sampled subset of the training data. 
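In the non-stochastic (batch) setting, Equations (2) and (3) can be spelled out directly. The numpy sketch below is an added illustration rather than the authors' implementation, and its dictionary-based interface is purely an assumption; the signs follow from differentiating Equation (1). The rescaling required in the stochastic setting is described next.

```python
import numpy as np

def hierarchical_gradients(base_grads, thetas, theta_top,
                           sigma_m, sigma_top, mu=0.0):
    """Batch gradients of the hierarchical log-likelihood (for gradient ascent).

    base_grads : dict model -> gradient of that model's data log-likelihood
    thetas     : dict model -> that model's current weight vector
    theta_top  : top-level weight vector theta_* (same dimensionality)
    sigma_m    : dict model -> model-specific prior standard deviation
    sigma_top  : standard deviation of the top-level Gaussian prior
    """
    model_grads = {}
    # Top-level prior term: pulls theta_* toward mu (zero in our case).
    top_grad = -(np.asarray(theta_top) - mu) / sigma_top ** 2
    for m, grad in base_grads.items():
        # Pull of model m's weights toward the top-level weights.
        pull = (np.asarray(thetas[m]) - theta_top) / sigma_m[m] ** 2
        model_grads[m] = grad - pull   # Equation (2)
        top_grad += pull               # model m's contribution to Equation (3)
    return model_grads, top_grad
```

Written this way, the role of the variances is easy to see: a small σm makes the pull term dominate, effectively tying that model's weights to θ∗, while a large σm lets the models decouple, matching the discussion above.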
When computing the term for the prior, it must be rescaled by multiplying its value and derivatives by the proportion of the training data used. The stochastic objective function, where bD ⊆D is a randomly drawn subset of the full training set, is given by Lstoch(D; θ) = Lorig( bD; θ) −| bD| |D| X i (θ∗,i)2 2σ2∗ (4) This is a stochastic function, and multiple calls to it with the same D and θ will produce different values because bD is resampled each time. When designing a stochastic objective function, the critical fact to keep in mind is that the summed values and partial derivatives for any split of the data need to be equal to that of the full dataset. In practice, stochastic gradient descent only makes use of the partial derivatives and not the function value, so we will focus the remainder of the discussion on how to rescale the partial derivatives. We now describe the more complicated case of stochastic optimization with a hierarchical objective function. For the sake of simplicity, let us assume that we are using a batch size of one, meaning | bD| = 1 in the above equation. Note that in the hierarchical model, each datum (sentence) in each base model should be weighted equally, so whichever dataset is the largest should be proportionally more likely to have one of its data sampled. For the sampled datum d, we then compute the function value and partial derivatives with respect to the correct base model for that datum. When we rescale the model-specific prior, we rescale based on the number of data in that model’s training set, not the total number of data in all the models combined. Having uniformly randomly drawn datum d ∈S m∈M Dm, let m(d) ∈M tell us to which model’s training data the datum belongs. The stochastic partial derivatives will equal zero for all model parameters θm such that m ̸= m(d), and for θm(d) it becomes: ∂Lhier-stoch(D; θ) ∂θm(d),i = (5) ∂Lm(d)({d}; θm(d)) ∂θm(d),i − 1 |Dm(d)| θm(d),i −θ∗,i σ2 d  Now we will discuss the stochastic partial derivatives with respect to the top-level parameters θ∗, which requires modifying Equation 3. The first term in that equation is a summation over all the models. In the stochastic derivative we only perform this computation for the datum’s model m(d), and then we rescale that value based on the number of data in that datum’s model |Dm(d)|. The second term in that equation is rescaled by the total number of data in all models combined. The stochastic partial derivatives with respect to θ∗become: ∂Lhier-stoch(D; θ) ∂θ∗,i = (6) 1 |Dm(d)| θ∗,i −θm(d),i σ2m  − 1 P m∈M |Dm| θ∗,i σ2∗  where for conciseness we omit µ under the assumption that it equals zero. An equally correct formulation for the partial derivative of θ∗is to simply rescale Equation 3 by the total number of data in all models. Early experiments found that both versions gave similar performance, but the latter was significantly 723 B-PER Hilary I-PER Clinton O visited B-GPE Haiti O . (a) PER Hilary Clinton O visited GPE Haiti O . (b) ROOT PER PER-i Hilary PER-i Clinton O visited GPE GPE-i Haiti O . (c) Figure 3: A linear-chain CRF (a) labels each word, whereas a semi-CRF (b) labels entire entities. A semi-CRF can be represented as a tree (c), where i indicates an internal node for an entity. slower to compute because it required summing over the parameter vectors for all base models instead of just the vector for the datum’s model. When using a batch size larger than one, you compute the given functions for each datum in the batch and then add them together. 
4 Base Models Our hierarchical joint model is composed of three separate models, one for just named entity recognition, one for just parsing, and one for joint parsing and named entity recognition. In this section we will review each of these models individually. 4.1 Semi-CRF for Named Entity Recognition For our named entity recognition model we use a semi-CRF (Sarawagi and Cohen, 2004; Andrew, 2006). Semi-CRFs are very similar to the more popular linear-chain CRFs, but with several key advantages. Semi-CRFs segment and label the text simultaneously, whereas a linear-chain CRF will only label each word, and segmentation is implied by the labels assigned to the words. When doing named entity recognition, a semi-CRF will have one node for each entity, unlike a regular CRF which will have one node for each word.2 See Figure 3a-b for an example of a semi-CRF and a linear-chain CRF over the same sentence. Note that the entity Hilary Clinton has one node in the semi-CRF representation, but two nodes in the linear-chain CRF. Because different segmentations have different model structures in a semiCRF, one has to consider all possible structures (segmentations) as well as all possible labelings. It is common practice to limit segment length in order to speed up inference, as this allows for the use of a modified version of the forward-backward algorithm. When segment length is not restricted, the inference procedure is the same as that used in parsing (Finkel and Manning, 2009c).3 In this work we do not enforce a length restriction, and directly utilize the fact that the model can be transformed into a parsing model. Figure 3c shows a parse tree representation of a semi-CRF. While a linear-chain CRF allows features over adjacent words, a semi-CRF allows them over adjacent segments. This means that a semi-CRF can utilize all features used by a linear-chain CRF, and can also utilize features over entire segments, such as First National Bank of New York City, instead of just adjacent words like First National and Bank of. Let y be a vector representing the labeling for an entire sentence. yi encodes the label of the ith segment, along with the span of words the segment encompasses. Let θ be the feature weights, and f(s, yi, yi−1) the feature function over adjacent segments yi and yi−1 in sentence s.4 The log likelihood of a semi-CRF for a single sentence s is given by: L(y|s; θ) = 1 Zs |y| X i=1 exp{θ · f(s, yi, yi−1)} (7) The partition function Zs serves as a normalizer. It requires summing over the set ys of all possible segmentations and labelings for the sentence s: Zs = X y∈ys |y| X i=1 exp{θ · f(s, yi, yi−1)} (8) 2Both models will have one node per word for non-entity words. 3While converting a semi-CRF into a parser results in much slower inference than a linear-chain CRF, it is still significantly faster than a treebank parser due to the reduced number of labels. 4There can also be features over single entities, but these can be encoded in the feature function over adjacent entities, so for notational simplicity we do not include an additional term for them. 724 FRAG INTJ UH Like NP NP DT a NN gross PP IN of NP-MONEY QP-MONEY-i DT-MONEY-i a CD-MONEY-i billion NNS-MONEY-i dollars NP JJ last NN year Figure 4: An example of a sentence jointly annotated with parse and named entity information. Named entities correspond to nodes in the tree, and the parse label is augmented with the named entity information. 
Because we use a tree representation, it is easy to ensure that the features used in the NER model are identical to those in the joint parsing and named entity model, because the joint model (which we will discuss in Section 4.3) is also based on a tree representation where each entity corresponds to a single node in the tree. 4.2 CRF-CFG for Parsing Our parsing model is the discriminatively trained, conditional random field-based context-free grammar parser (CRF-CFG) of (Finkel et al., 2008). The relationship between a CRF-CFG and a PCFG is analogous to the relationship between a linearchain CRF and a hidden Markov model (HMM) for modeling sequence data. Let t be a complete parse tree for sentence s, and each local subtree r ∈t encodes both the rule from the grammar, and the span and split information (e.g NP(7,9) →JJ(7,8)NN(8,9) which covers the last two words in Figure 1). The feature function f(r, s) computes the features, which are defined over a local subtree r and the words of the sentence. Let θ be the vector of feature weights. The log-likelihood of tree t over sentence s is: L(t|s; θ) = 1 Zs X r∈t exp{θ · f(r, s)} (9) To compute the partition function Zs, which serves to normalize the function, we must sum over τ(s), the set of all possible parse trees for sentence s. The partition function is given by: Zs = X t′∈τ(s) X r∈t′ exp{θ · f(r, s)} We also need to compute the partial derivatives which are used during optimization. Let fi(r, s) be the value of feature i for subtree r over sentence s, and let Eθ[fi|s] be the expected value of feature i in sentence s, based on the current model parameters θ. The partial derivatives of θ are then given by ∂L ∂θi = X (t,s)∈D X r∈t fi(r, s)  −Eθ[fi|s] ! (10) Just like with a linear-chain CRF, this equation will be zero when the feature expectations in the model equal the feature values in the training data. A variant of the inside-outside algorithm is used to efficiently compute the likelihood and partial derivatives. See (Finkel et al., 2008) for details. 4.3 Joint Model of Parsing and Named Entity Recognition Our base joint model for parsing and named entity recognition is the same as (Finkel and Manning, 2009b), which is also based on the discriminative parser discussed in the previous section. The parse tree structure is augmented with named entity information; see Figure 4 for an example. The features in the joint model are designed in a manner that fits well with the hierarchical joint model: some are over just the parse structure, some are over just the named entities, and some are over the joint structure. The joint model shares the NER and parse features with the respective single-task models. Features over the joint structure only appear in the joint model, and their weights are only indirectly influenced by the singly-annotated data. In the parsing model, the grammar consists of only the rules observed in the training data. In the joint model, the grammar is augmented with ad725 Training Testing Range # Sent. Range # Sent. ABC 0–55 1195 56–69 199 MNB 0–17 509 18–25 245 NBC 0–29 589 30–39 149 PRI 0–89 1704 90–112 394 VOA 0–198 1508 199–264 385 Table 1: Training and test set sizes for the five datasets in sentences. The file ranges refer to the numbers within the names of the original OntoNotes files. ditional joint rules which are composed by adding named entity information to existing parse rules. Because the grammars are based on the observed data, and the two models have different data, they will have somewhat different grammars. 
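As a side note on Equation (10) above, the gradient of the CRF-CFG objective has the familiar observed-minus-expected form. The sketch below is an added illustration, not the parser's actual code: expected_counts is an assumed helper standing in for the inside-outside computation of Finkel et al. (2008), and the data layout is likewise an assumption.

```python
from collections import defaultdict

def crf_cfg_gradient(training_data, feature_fn, expected_counts, weights):
    """Partial derivatives from Equation (10).

    training_data   : iterable of (gold_rules, sentence) pairs, where gold_rules
                      lists the anchored local subtrees r of the gold tree t
    feature_fn      : feature_fn(r, sentence) -> dict of feature values f(r, s)
    expected_counts : expected_counts(sentence, weights) -> dict E_theta[f | s]
    """
    grad = defaultdict(float)
    for gold_rules, sentence in training_data:
        for r in gold_rules:
            for name, value in feature_fn(r, sentence).items():
                grad[name] += value        # observed feature counts
        for name, value in expected_counts(sentence, weights).items():
            grad[name] -= value            # minus the model's expectations
    return grad
```

As noted above, this gradient is zero exactly when the model's feature expectations match the empirical counts in the training data.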
In our hierarchical joint model, we added all observed rules from the joint data (stripped of named entity information) to the parse-only grammar, and we added all observed rules from the parse-only data to the grammar for the joint model, and augmented them with named entity information in the same manner as the rules observed in the joint data. Earlier we said that the NER-only model uses identical named entity features as the joint model (and similarly for the parse-only model), but this is not quite true. They use identical feature templates, such as word, but different realizations of those features will occur with the different datasets. For instance, the NER-only model may have word=Nigel as a feature, but because Nigel never occurs in the joint data, that feature is never manifested and no weight is learned for it. We deal with this similarly to how we dealt with the grammar: if a named entity feature occurs in either the joint data or the NER-only data, then both models will learn a weight for that feature. We do the same thing for the parse features. This modeling decision gives the joint model access to potentially useful features to which it would not have had access if it were not part of the hierarchical model.5 5 Experiments and Discussion We compared our hierarchical joint model to a regular (non-hierarchical) joint model, and to parseonly and NER-only models. Our baseline experiments were modeled after those in (Finkel and Manning, 2009b), and while our results were not identical (we updated to a newer release of the data), we had similar results and found the same general trends with respect to how the joint 5In the non-hierarchical setting, you could include those features in the optimization, but, because there would be no evidence about them, their weights would be zero due to regularization. model improved on the single models. We used OntoNotes 3.0 (Hovy et al., 2006), and made the same data modifications as (Finkel and Manning, 2009b) to ensure consistency between the parsing and named entity annotations. Table 2 has our complete set of results, and Table 1 gives the number of training and test sentences. For each section of the data (ABC, MNB, NBC, PRI, VOA) we ran experiments training a linear-chain CRF on only the named entity information, a CRF-CFG parser on only the parse information, a joint parser and named entity recognizer, and our hierarchical model. For the hierarchical model, we used the CNN portion of the data (5093 sentences) for the extra named entity data (and ignored the parse trees) and the remaining portions combined for the extra parse data (and ignored the named entity annotations). We used σ∗= 1.0 and σm = 0.1, which were chosen based on early experiments on development data. Small changes to σm do not appear to have much influence, but larger changes do. We similarly decided how many iterations to run stochastic gradient descent for (20) based on early development data experiments. We did not run this experiment on the CNN portion of the data, because the CNN data was already being used as the extra NER data. As Table 2 shows, the hierarchical model did substantially better than the joint model overall, which is not surprising given the extra data to which it had access. Looking at the smaller corpora (NBC and MNB) we see the largest gains, with both parse and NER performance improving by about 8% F1. ABC saw about a 6% gain on both tasks, and VOA saw a 1% gain on both. 
Our one negative result is in the PRI portion: parsing improves slightly, but NER performance decreases by almost 2%. The same experiment on development data resulted in a performance increase, so we are not sure why we saw a decrease here. One general trend, which is not surprising, is that the hierarchical model helps the smaller datasets more than the large ones. The source of this is twofold: lower baselines are generally easier to improve upon, and the larger corpora had less singlyannotated data to provide improvements, because it was composed of the remaining, smaller, sections of OntoNotes. We found it interesting that the gains tended to be similar on both tasks for all datasets, and believe this fact is due to our use of roughly the same amount of singly-annotated data for both parsing and NER. One possible conflating factor in these experiments is that of domain drift. While we tried to 726 Parse Labeled Bracketing Named Entities Precision Recall F1 Precision Recall F1 ABC Just Parse 69.8% 69.9% 69.8% – Just NER – 77.0% 75.1% 76.0% Baseline Joint 70.2% 70.5% 70.3% 79.2% 76.5% 77.8% Hierarchical Joint 75.5% 74.4% 74.9% 85.1% 82.7% 83.9% MNB Just Parse 61.7% 65.5% 63.6% – Just NER – 69.6% 49.0% 57.5% Baseline Joint 61.7% 66.2% 63.9% 70.9% 63.5% 67.0% Hierarchical Joint 72.6% 70.2% 71.4% 74.4% 75.5% 74.9% NBC Just Parse 59.9% 63.9% 61.8% – Just NER – 63.9% 60.9% 62.4% Baseline Joint 59.3% 64.2% 61.6% 68.9% 62.8% 65.7% Hierarchical Joint 70.4% 69.9% 70.2% 72.9% 74.0% 73.4% PRI Just Parse 78.6% 77.0% 76.9% – Just NER – 81.3% 77.8% 79.5% Baseline Joint 78.0% 78.6% 78.3% 86.3% 86.0% 86.2% Hierarchical Joint 79.2% 78.5% 78.8% 84.2% 85.5% 84.8% VOA Just Parse 77.5% 76.5% 77.0% – Just NER – 85.2% 80.3% 82.7% Baseline Joint 77.2% 77.8% 77.5% 87.5% 86.7% 87.1% Hierarchical Joint 79.8% 77.8% 78.8% 87.7% 88.9% 88.3% Table 2: Full parse and NER results for the six datasets. Parse trees were evaluated using evalB, and named entities were scored using micro-averaged F-measure (conlleval). get the most similar annotated data available – data which was annotated by the same annotators, and all of which is broadcast news – these are still different domains. While this is likely to have a negative effect on results, we also believe this scenario to be a more realistic than if it were to also be data drawn from the exact same distribution. 6 Conclusion In this paper we presented a novel method for improving joint modeling using additional data which has not been labeled with the entire joint structure. While conventional wisdom says that adding more training data should always improve performance, this work is the first to our knowledge to incorporate singly-annotated data into a joint model, thereby providing a method for this additional data, which cannot be directly used by the non-hierarchical joint model, to help improve joint modeling performance. We built single-task models for the non-jointly labeled data, designing those single-task models so that they have features in common with the joint model, and then linked all of the different single-task and joint models via a hierarchical prior. We performed experiments on joint parsing and named entity recognition, and found that our hierarchical joint model substantially outperformed a joint model which was trained on only the jointly annotated data. 
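For illustration, assume the hierarchical prior takes the standard form in which each model's weights are penalized toward the shared top-level weights (variance σm²) and the top-level weights toward zero (variance σ*²); the exact formulation is the one given earlier in the paper, so the sketch below is only an approximation of its gradient contribution, to be added to each model's log-likelihood gradient during stochastic gradient descent.

import numpy as np

def prior_gradients(model_weights, top_weights, sigma_m, sigma_star):
    # Assumed penalty: sum_m ||theta_m - theta*||^2 / (2 sigma_m^2) + ||theta*||^2 / (2 sigma*^2).
    # Returned values are gradients of the negated penalty (for gradient ascent).
    grads = {m: -(w - top_weights) / sigma_m ** 2 for m, w in model_weights.items()}
    top_grad = -top_weights / sigma_star ** 2 + sum(
        (w - top_weights) / sigma_m ** 2 for w in model_weights.values())
    return grads, top_grad

# Toy example with the hyperparameter values used here (sigma* = 1.0, sigma_m = 0.1):
models = {"joint": np.array([0.2, -0.1]), "ner": np.array([0.0, 0.3])}
per_model, top = prior_gradients(models, np.zeros(2), sigma_m=0.1, sigma_star=1.0)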
Future directions for this work include automatically learning the variances, σm and σ∗in the hierarchical model, so that the degree of information sharing between the models is optimized based on the training data available. We are also interested in ways to modify the objective function to place more emphasis on learning a good joint model, instead of equally weighting the learning of the joint and single-task models. Acknowledgments Many thanks to Daphne Koller for discussions which led to this work, and to Richard Socher for his assistance and input. Thanks also to our anonymous reviewers and Yoav Goldberg for useful feedback on an earlier draft of this paper. This material is based upon work supported by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the Air Force Research Laboratory (AFRL). The first author is additionally supported by a Stanford Graduate Fellowship. 727 References Meni Adler and Michael Elhadad. 2006. An unsupervised morpheme-based hmm for hebrew morphological disambiguation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 665–672, Morristown, NJ, USA. Association for Computational Linguistics. Rie Kubota Ando and Tong Zhang. 2005. A highperformance semi-supervised learning method for text chunking. In ACL ’05: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 1–9, Morristown, NJ, USA. Association for Computational Linguistics. Galen Andrew. 2006. A hybrid markov/semi-markov conditional random field for sequence segmentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2006). J. Baxter. 1997. A bayesian/information theoretic model of learning to learn via multiple task sampling. In Machine Learning, volume 28. R. Caruana. 1997. Multitask learning. In Machine Learning, volume 28. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In ACL 1997. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In Conference of the Association for Computational Linguistics (ACL), Prague, Czech Republic. Gal Elidan, Benjamin Packer, Geremy Heitz, and Daphne Koller. 2008. Convex point estimation using undirected bayesian transfer hierarchies. In UAI 2008. T. Evgeniou, C. Micchelli, and M. Pontil. 2005. Learning multiple tasks with kernel methods. In Journal of Machine Learning Research. Jenny Rose Finkel and Christopher D. Manning. 2009a. Hierarchical bayesian domain adaptation. In Proceedings of the North American Association of Computational Linguistics (NAACL 2009). Jenny Rose Finkel and Christopher D. Manning. 2009b. Joint parsing and named entity recognition. In Proceedings of the North American Association of Computational Linguistics (NAACL 2009). Jenny Rose Finkel and Christopher D. Manning. 2009c. Nested named entity recognition. In Proceedings of EMNLP 2009. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based conditional random field parsing. In ACL/HLT-2008. Andrew Gelman and Jennifer Hill. 2006. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press. A. Gelman, J. B. Carlin, H. S. Stern, and Donald D. B. Rubin. 2003. Bayesian Data Analysis. Chapman & Hall. 
Yoav Goldberg and Reut Tsarfaty. 2008. A single generative model for joint morphological segmentation and syntactic parsing. In Proceedings of ACL-08: HLT, pages 371–379, Columbus, Ohio, June. Association for Computational Linguistics. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In HLT-NAACL 2006. Richard Johansson and Pierre Nugues. 2008. Dependencybased syntactic-semantic analysis with propbank and nombank. In CoNLL ’08: Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 183–187, Morristown, NJ, USA. Association for Computational Linguistics. Scott Miller, Heidi Fox, Lance Ramshaw, and Ralph Weischedel. 2000. A novel use of statistical parsing to extract information from text. In In 6th Applied Natural Language Processing Conference, pages 226–233. Sunita Sarawagi and William W. Cohen. 2004. Semi-markov conditional random fields for information extraction. In In Advances in Neural Information Processing Systems 17, pages 1185–1192. Mihai Surdeanu, Richard Johansson, Adam Meyers, Llu´ıs M`arquez, and Joakim Nivre. 2008. The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of the 12th Conference on Computational Natural Language Learning (CoNLL), Manchester, UK. Charles Sutton and Andrew McCallum. 2005. Joint parsing and semantic role labeling. In Conference on Natural Language Learning (CoNLL). Kristina Toutanova and Colin Cherry. 2009. A global model for joint lemmatization and part-of-speech prediction. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 486–494, Suntec, Singapore, August. Association for Computational Linguistics. Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram. 2007. Multi-task learning for classification with dirichlet process priors. J. Mach. Learn. Res., 8. Kai Yu, Volker Tresp, and Anton Schwaighofer. 2005. Learning gaussian processes from multiple tasks. In ICML ’05: Proceedings of the 22nd international conference on Machine learning. Yue Zhang and Stephen Clark. 2008. Joint word segmentation and POS tagging using a single perceptron. In ACL 2008. 728
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 729–738, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Detecting Errors in Automatically-Parsed Dependency Relations Markus Dickinson Indiana University [email protected] Abstract We outline different methods to detect errors in automatically-parsed dependency corpora, by comparing so-called dependency rules to their representation in the training data and flagging anomalous ones. By comparing each new rule to every relevant rule from training, we can identify parts of parse trees which are likely erroneous. Even the relatively simple methods of comparison we propose show promise for speeding up the annotation process. 1 Introduction and Motivation Given the need for high-quality dependency parses in applications such as statistical machine translation (Xu et al., 2009), natural language generation (Wan et al., 2009), and text summarization evaluation (Owczarzak, 2009), there is a corresponding need for high-quality dependency annotation, for the training and evaluation of dependency parsers (Buchholz and Marsi, 2006). Furthermore, parsing accuracy degrades unless sufficient amounts of labeled training data from the same domain are available (e.g., Gildea, 2001; Sekine, 1997), and thus we need larger and more varied annotated treebanks, covering a wide range of domains. However, there is a bottleneck in obtaining annotation, due to the need for manual intervention in annotating a treebank. One approach is to develop automatically-parsed corpora (van Noord and Bouma, 2009), but a natural disadvantage with such data is that it contains parsing errors. Identifying the most problematic parses for human post-processing could combine the benefits of automatic and manual annotation, by allowing a human annotator to efficiently correct automatic errors. We thus set out in this paper to detect errors in automatically-parsed data. If annotated corpora are to grow in scale and retain a high quality, annotation errors which arise from automatic processing must be minimized, as errors have a negative impact on training and evaluation of NLP technology (see discussion and references in Boyd et al., 2008, sec. 1). There is work on detecting errors in dependency corpus annotation (Boyd et al., 2008), but this is based on finding inconsistencies in annotation for identical recurring strings. This emphasis on identical strings can result in high precision, but many strings do not recur, negatively impacting the recall of error detection. Furthermore, since the same strings often receive the same automatic parse, the types of inconsistencies detected are likely to have resulted from manual annotation. While we can build from the insight that simple methods can provide reliable annotation checks, we need an approach which relies on more general properties of the dependency structures, in order to develop techniques which work for automatically-parsed corpora. Developing techniques to detect errors in parses in a way which is independent of corpus and parser has fairly broad implications. By using only the information available in a training corpus, the methods we explore are applicable to annotation error detection for either hand-annotated or automatically-parsed corpora and can also provide insights for parse reranking (e.g., Hall and Nov´ak, 2005) or parse revision (Attardi and Ciaramita, 2007). 
Although we focus only on detecting errors in automatically-parsed data, similar techniques have been applied for hand-annotated data (Dickinson, 2008; Dickinson and Foster, 2009). Our general approach is based on extracting a grammar from an annotated corpus and comparing dependency rules in a new (automaticallyannotated) corpus to the grammar. Roughly speaking, if a dependency rule—which represents all the dependents of a head together (see section 3.1)— does not fit well with the grammar, it is flagged as potentially erroneous. The methods do not have to be retrained for a given parser’s output (e.g., 729 Campbell and Johnson, 2002), but work by comparing any tree to what is in the training grammar (cf. also approaches stacking hand-written rules on top of other parsers (Bick, 2007)). We propose to flag erroneous parse rules, using information which reflects different grammatical properties: POS lookup, bigram information, and full rule comparisons. We build on a method to detect so-called ad hoc rules, as described in section 2, and then turn to the main approaches in section 3. After a discussion of a simple way to flag POS anomalies in section 4, we evaluate the different methods in section 5, using the outputs from two different parsers. The methodology proposed in this paper is easy to implement and independent of corpus, language, or parser. 2 Approach We take as a starting point two methods for detecting ad hoc rules in constituency annotation (Dickinson, 2008). Ad hoc rules are CFG productions extracted from a treebank which are “used for specific constructions and unlikely to be used again,” indicating annotation errors and rules for ungrammaticalities (see also Dickinson and Foster, 2009). Each method compares a given CFG rule to all the rules in a treebank grammar. Based on the number of similar rules, a score is assigned, and rules with the lowest scores are flagged as potentially ad hoc. This procedure is applicable whether the rules in question are from a new data set—as in this paper, where parses are compared to a training data grammar—or drawn from the treebank grammar itself (i.e., an internal consistency check). The two methods differ in how the comparisons are done. First, the bigram method abstracts a rule to its bigrams. Thus, a rule such as NP → JJ NN provides support for NP →DT JJ JJ NN, in that it shares the JJ NN sequence. By contrast, in the other method, which we call the whole rule method,1 a rule is compared in its totality to the grammar rules, using Levenshtein distance. There is no abstraction, meaning all elements are present—e.g., NP →DT JJ JJ NN is very similar to NP →DT JJ NN because the sequences differ by only one category. While previously used for constituencies, what is at issue is simply the valency of a rule, where by valency we refer to a head and its entire set 1This is referred to whole daughters in Dickinson (2008), but the meaning of “daughters” is less clear for dependencies. of arguments and adjuncts (cf. Przepi´orkowski, 2006)—that is, a head and all its dependents. The methods work because we expect there to be regularities in valency structure in a treebank grammar; non-conformity to such regularities indicates a potential problem. 3 Ad hoc rule detection 3.1 An appropriate representation To capture valency, consider the dependency tree from the Talbanken05 corpus (Nilsson and Hall, 2005) in figure 1, for the Swedish sentence in (1), which has four dependency pairs.2 (1) Det it g˚ar goes bara just inte not ihop together . 
‘It just doesn’t add up.’ SS MA NA PL Det g˚ar bara inte ihop PO VV AB AB AB Figure 1: Dependency graph example On a par with constituency rules, we define a grammar rule as a dependency relation rewriting as a head with its sequence of POS/dependent pairs (cf. Kuhlmann and Satta, 2009), as in figure 2. This representation supports the detection of idiosyncracies in valency.3 1. TOP →root ROOT:VV 2. ROOT →SS:PO VV MA:AB NA:AB PL:AB 3. SS →PO 5. NA →AB 4. MA →AB 6. PL →AB Figure 2: Rule representation for (1) For example, for the ROOT category, the head is a verb (VV), and it has 4 dependents. The extent to which this rule is odd depends upon whether comparable rules—i.e., other ROOT rules or other VV rules (see section 3.2)—have a similar set of dependents. While many of the other rules seem rather spare, they provide useful information, showing categories which have no dependents. With a TOP rule, we have a rule for every 2Category definitions are in appendix A. 3Valency is difficult to define for coordination and is specific to an annotation scheme. We leave this for the future. 730 head, including the virtual root. Thus, we can find anomalous rules such as TOP →root ROOT:AV ROOT:NN, where multiple categories have been parsed as ROOT. 3.2 Making appropriate comparisons In comparing rules, we are trying to find evidence that a particular (parsed) rule is valid by examining the evidence from the (training) grammar. Units of comparison To determine similarity, one can compare dependency relations, POS tags, or both. Valency refers to both properties, e.g., verbs which allow verbal (POS) subjects (dependency). Thus, we use the pairs of dependency relations and POS tags as the units of comparison. Flagging individual elements Previous work scored only entire rules, but some dependencies are problematic and others are not. Thus, our methods score individual elements of a rule. Comparable rules We do not want to compare a rule to all grammar rules, only to those which should have the same valents. Comparability could be defined in terms of a rule’s dependency relation (LHS) or in terms of its head. Consider the four different object (OO) rules in (2). These vary a great deal, and much of the variability comes from the fact that they are headed by different POS categories, which tend to have different selectional properties. The head POS thus seems to be predictive of a rule’s valency. (2) a. OO →PO b. OO →DT:EN AT:AJ NN ET:VV c. OO →SS:PO QV VG:VV d. OO →DT:PO AT:AJ VN But we might lose information by ignoring rules with the same left-hand side (LHS). Our approach is thus to take the greater value of scores when comparing to rules either with the same dependency relation or with the same head. A rule has multiple chances to prove its value, and low scores will only be for rules without any type of support. Taking these points together, for a given rule of interest r, we assign a score (S) to each element ei in r, where r = e1...em by taking the maximum of scores for rules with the same head (h) or same LHS (lhs), as in (3). For the first element in (2b), for example, S(DT:EN) = max{s(DT:EN, NN), s(DT:EN, OO)}. The question is now how we define s(ei, c) for the comparable element c. (3) S(ei) = max{s(ei, h), s(ei, lhs)} 3.3 Whole rule anomalies 3.3.1 Motivation The whole rule method compares a list of a rule’s dependents to rules in a database, and then flags rule elements without much support. 
By using all dependents as a basis for comparison, this method detects improper dependencies (e.g., an adverb modifying a noun), dependencies in the wrong overall location of a rule (e.g., an adverb before an object), and rules with unnecessarily long argument structures. For example, in (4), we have an improper relation between skall (‘shall’) and sambeskattas (‘be taxed together’), as in figure 3. It is parsed as an adverb (AA), whereas it should be a verb group (VG). The rule for this part of the tree is +F →++:++ SV AA:VV, and the AA:VV position will be low-scoring because the ++:++ SV context does not support it. (4) Makars spouses’ ¨ovriga other inkomster incomes ¨ar are B-inkomster B-incomes och and skall shall som as tidigare previously sambeskattas be taxed togeher . . ‘The other incomes of spouses are B-incomes and shall, as previously, be taxed together.’ ++ +F UK KA VG och skall som tidigare sambeskattas ++ SV UK AJ VV ++ +F UK SS AA och skall som tidigare sambeskattas ++ SV UK AJ VV Figure 3: Wrong label (top=gold, bottom=parsed) 3.3.2 Implementation The method we use to determine similarity arises from considering what a rule is like without a problematic element. Consider +F →++:++ SV AA:VV from figure 3, where AA should be a different category (VG). The rule without this error, +F →++:++ SV, starts several rules in the 731 training data, including some with VG:VV as the next item. The subrule ++:++ SV seems to be reliable, whereas the subrules containing AA:VV (++:++ AA:VV and SV AA:VV) are less reliable. We thus determine reliability by seeing how often each subsequence occurs in the training rule set. Throughout this paper, we use the term subrule to refer to a rule subsequence which is exactly one element shorter than the rule it is a component of. We examine subrules, counting their frequency as subrules, not as complete rules. For example, TOP rules with more than one dependent are problematic, e.g., TOP →root ROOT:AV ROOT:NN. Correspondingly, there are no rules with three elements containing the subrule root ROOT:AV. We formalize this by setting the score s(ei, c) equal to the summation of the frequencies of all comparable subrules containing ei from the training data, as in (5), where B is the set of subrules of r with length one less. (5) s(ei, c) = P sub∈B:ei∈sub C(sub, c) For example, with c = +F, the frequency of +F →++:++ SV as a subrule is added to the scores for ++:++ and SV. In this case, +F →++:++ SV VG:BV, +F →++:++ SV VG:AV, and +F →++:++ SV VG:VV all add support for +F → ++:++ SV being a legitimate subrule. Thus, ++:++ and SV are less likely to be the sources of any problems. Since +F →SV AA:VV and +F → ++:++ AA:VV have very little support in the training data, AA:VV receives a low score. Note that the subrule count C(sub, c) is different than counting the number of rules containing a subrule, as can be seen with identical elements. For example, for SS →VN ET:PR ET:PR, C(VN ET:PR, SS) = 2, in keeping with the fact that there are 2 pieces of evidence for its legitimacy. 3.4 Bigram anomalies 3.4.1 Motivation The bigram method examines relationships between adjacent sisters, complementing the whole rule method by focusing on local properties. For (6), for example, we find the gold and parsed trees in figure 4. For the long parsed rule TA →PR HD:ID HD:ID IR:IR AN:RO JR:IR, all elements get low whole rule scores, i.e., are flagged as potentially erroneous. 
But only the final elements have anomalous bigrams: HD:ID IR:IR, IR:IR AN:RO, and AN:RO JR:IR all never occur. (6) N¨ar when det it g¨aller concerns inkomst˚aret the income year 1971 1971 ( ( taxerings˚aret assessment year 1972 1972 ) ) skall shall barnet the child ... ... ‘Concerning the income year of 1971 (assessment year 1972), the child . . . ’ 3.4.2 Implementation To obtain a bigram score for an element, we simply add together the bigrams which contain the element in question, as in (7). (7) s(ei, c) = C(ei−1ei, c) + C(eiei+1, c) Consider the rule from figure 4. With c = TA, the bigram HD:ID IR:IR never occurs, so both HD:ID and IR:IR get 0 added to their score. HD:ID HD:ID, however, is a frequent bigram, so it adds weight to HD:ID, i.e., positive evidence comes from the bigram on the left. If we look at IR:IR, on the other hand, IR:IR AN:RO occurs 0 times, and so IR:IR gets a total score of 0. Both scoring methods treat each element independently. Every single element could be given a low score, even though once one is corrected, another would have a higher score. Future work can examine factoring in all elements at once. 4 Additional information The methods presented so far have limited definitions of comparability. As using complementary information has been useful in, e.g., POS error detection (Loftsson, 2009), we explore other simple comparable properties of a dependency grammar. Namely, we include: a) frequency information of an overall dependency rule and b) information on how likely each dependent is to be in a relation with its head, described next. 4.1 Including POS information Consider PA →SS:NN XX:XX HV OO:VN, as illustrated in figure 5 for the sentence in (8). This rule is entirely correct, yet the XX:XX position has low whole rule and bigram scores. (8) Uppgift information om of vilka which orter neighborhood som who har has utk¨orning delivery finner find Ni you ocks˚a also i in ... ... ‘You can also find information about which neighborhoods have delivery services in . . . ’ 732 AA HD HD DT PA IR DT AN JR ... N¨ar det g¨aller inkomst˚aret 1971 ( taxerings˚aret 1972 ) ... PR ID ID NN RO IR NN RO IR ... TA HD HD PA ET IR DT AN JR ... N¨ar det g¨aller inkomst˚aret 1971 ( taxerings˚aret 1972 ) ... PR ID ID NN RO IR NN RO IR ... Figure 4: A rule with extra dependents (top=gold, bottom=parsed) ET DT SS XX PA OO Uppgift om vilka orter som har utk¨orning NN PR PO NN XX HV VN Figure 5: Overflagging (gold=parsed) One method which does not have this problem of overflagging uses a “lexicon” of POS tag pairs, examining relations between POS, irrespective of position. We extract POS pairs, note their dependency relation, and add a L/R to the label to indicate which is the head (Boyd et al., 2008). Additionally, we note how often two POS categories occur as a non-depenency, using the label NIL, to help determine whether there should be any attachment. We generate NILs by enumerating all POS pairs in a sentence. For example, from figure 5, the parsed POS pairs include NN PR 7→ETL, NN PO 7→NIL, etc. We convert the frequencies to probabilities. For example, of 4 total occurrences of XX HV in the training data, 2 are XX-R (cf. figure 5). A probability of 0.5 is quite high, given that NILs are often the most frequent label for POS pairs. 5 Evaluation In evaluating the methods, our main question is: how accurate are the dependencies, in terms of both attachment and labeling? We therefore currently examine the scores for elements functioning as dependents in a rule. 
In figure 5, for example, for har (‘has’), we look at its score within ET → PR PA:HV and not when it functions as a head, as in PA →SS:NN XX:XX HV OO:VN. Relatedly, for each method, we are interested in whether elements with scores below a threshold have worse attachment accuracy than scores above, as we predict they do. We can measure this by scoring each testing data position below the threshold as a 1 if it has the correct head and dependency relation and a 0 otherwise. These are simply labeled attachment scores (LAS). Scoring separately for positions above and below a threshold views the task as one of sorting parser output into two bins, those more or less likely to be correctly parsed. For development, we also report unlabeled attachement scores (UAS). Since the goal is to speed up the post-editing of corpus data by flagging erroneous rules, we also report the precision and recall for error detection. We count either attachment or labeling errors as an error, and precision and recall are measured with respect to how many errors are found below the threshold. For development, we use two Fscores to provide a measure of the settings to examine across language, corpus, and parser conditions: the balanced F1 measure and the F0.5 measure, weighing precision twice as much. Precision is likely more important in this context, so as to prevent annotators from sorting through too many false positives. In practice, one way to use these methods is to start with the lowest thresholds and work upwards until there are too many non-errors. To establish a basis for comparison, we compare 733 method performance to a parser on its own.4 By examining the parser output without any automatic assistance, how often does a correction need to be made? 5.1 The data All our data comes from the CoNLL-X Shared Task (Buchholz and Marsi, 2006), specifically the 4 data sets freely available online. We use the Swedish Talbanken data (Nilsson and Hall, 2005) and the transition-based dependency parser MaltParser (Nivre et al., 2007), with the default settings, for developing the method. To test across languages and corpora, we use MaltParser on the other 3 corpora: the Danish DDT (Kromann, 2003), Dutch Alpino (van der Beek et al., 2002), and Portuguese Bosque data (Afonso et al., 2002). Then, we present results using the graph-based parser MSTParser (McDonald and Pereira, 2006), again with default settings, to test the methods across parsers. We use the gold standard POS tags for all experiments. 5.2 Development data In the first line of table 1, we report the baseline MaltParser accuracies on the Swedish test data, including baseline error detection precision (=1LASb), recall, and (the best) F-scores. In the rest of table 1, we report the best-performing results for each of the methods,5 providing the number of rules below and above a particular threshold, along with corresponding UAS and LAS values. To get the raw number of identified rules, multiply the number of corpus position below a threshold (b) times the error detection precision (P). For example, the bigram method with a threshold of 39 leads to finding 283 errors (455 × .622). Dependency elements with frequency below the lowest threshold have lower attachment scores (66.6% vs. 90.1% LAS), showing that simply using a complete rule helps sort dependencies. However, frequency thresholds have fairly low precision, i.e., 33.4% at their best. 
The whole rule and bigram methods reveal greater precision in identifying problematic dependencies, isolating elements with lower UAS and LAS scores than with frequency, along with corresponding greater pre4One may also use parser confidence or parser revision methods as a basis of comparison, but we are aware of no systematic evaluation of these approaches for detecting errors. 5Freq=rule frequency, WR=whole rule, Bi=bigram, POS=POS-based (POS scores multiplied by 10,000) cision and F-scores. The bigram method is more fine-grained, identifying small numbers of rule elements at each threshold, resulting in high error detection precision. With a threshold of 39, for example, we find over a quarter of the parser errors with 62% precision, from this one piece of information. For POS information, we flag 23.6% of the cases with over 60% precision (at 81.6). Taking all these results together, we can begin to sort more reliable from less reliable dependency tree elements, using very simple information. Additionally, these methods naturally group cases together by linguistic properties (e.g., adverbialverb dependencies within a particualr context), allowing a human to uncover the principle behind parse failure and ajudicate similar cases at the same time (cf. Wallis, 2003). 5.3 Discussion Examining some of the output from the Talbanken test data by hand, we find that a prominent cause of false positives, i.e., correctly-parsed cases with low scores, stems from low-frequency dependency-POS label pairs. If the dependency rarely occurs in the training data with the particular POS, then it receives a low score, regardless of its context. For example, the parsed rule TA →IG:IG RO has a correct dependency relation (IG) between the POS tags IG and its head RO, yet is assigned a whole rule score of 2 and a bigram score of 20. It turns out that IG:IG only occurs 144 times in the training data, and in 11 of those cases (7.6%) it appears immediately before RO. One might consider normalizing the scores based on overall frequency or adjusting the scores to account for other dependency rules in the sentence: in this case, there may be no better attachment. Other false positives are correctly-parsed elements that are a part of erroneous rules. For instance, in AA →UK:UK SS:PO TA:AJ AV SP:AJ OA:PR +F:HV +F:HV, the first +F:HV is correct, yet given a low score (0 whole rule, 1 bigram). The following and erroneous +F:HV is similarly given a low score. As above, such cases might be handled by looking for attachments in other rules (cf. Attardi and Ciaramita, 2007), but these cases should be relatively unproblematic for handcorrection, given the neighboring error. We also examined false negatives, i.e., errors with high scores. There are many examples of PR PA:NN rules, for instance, with the NN improp734 Score Thr. 
b a UASb LASb UASa LASa P R F1 F0.5 None n/a 5656 0 87.4% 82.0% 0% 0% 18.0% 100% 30.5% 21.5% Freq 0 1951 3705 76.6% 66.6% 93.1% 90.1% 33.4% 64.1% 43.9% 36.9% WR 0 894 4762 64.7% 54.0% 91.7% 87.3% 46.0% 40.5% 43.0% 44.8% 6 1478 4178 71.1% 60.9% 93.2% 89.5% 39.1% 56.9% 46.4% 41.7% Bi 0 56 5600 10.7% 7.1% 88.2% 82.8% 92.9% 5.1% 9.7% 21.0% 39 455 5201 51.6% 37.8% 90.6% 85.9% 62.2% 27.9% 38.5% 49.9% 431 1685 3971 74.1% 63.7% 93.1% 89.8% 36.3% 60.1% 45.2% 39.4% POS 0 54 5602 27.8% 22.2% 87.4% 82.6% 77.8% 4.1% 7.9% 17.0% 81.6 388 5268 48.5% 38.4% 90.3% 85.3% 61.6% 23.5% 34.0% 46.5% 763 1863 3793 75.4% 65.8% 93.3% 90.0% 34.2% 62.8% 44.3% 37.7% Table 1: MaltParser results for Talbanken, for select values (b = below, a = above threshold (Thr.)) erly attached, but there are also many correct instances of PR PA:NN. To sort out the errors, one needs to look at lexical knowledge and/or other dependencies in the tree. With so little context, frequent rules with only one dependent are not prime candidates for our methods of error detection. 5.4 Other corpora We now turn to the parsed data from three other corpora. The Alpino and Bosque corpora are approximately the same size as Talbanken, so we use the same thresholds for them. The DDT data is approximately half the size; to adjust, we simply halve the scores. In tables 2, 3, and 4, we present the results, using the best F0.5 and F1 settings from development. At a glance, we observe that the best method differs for each corpus and depending on an emphasis of precision or recall, with the bigram method generally having high precision. Score Thr. b LASb LASa P R None n/a 5585 73.8% 0% 26.2% 100% Freq 0 1174 43.2% 81.9% 56.8% 45.6% WR 0 483 32.5% 77.7% 67.5% 22.3% 6 787 39.4% 79.4% 60.6% 32.6% Bi 39 253 33.6% 75.7% 66.4% 11.5% 431 845 45.6% 78.8% 54.4% 31.4% POS 81.6 317 51.7% 75.1% 48.3% 10.5% 763 1767 53.5% 83.2% 46.5% 56.1% Table 2: MaltParser results for Alpino For Alpino, error detection is better with frequency than, for example, bigram scores. This is likely due to the fact that Alpino has the smallest label set of any of the corpora, with only 24 dependency labels and 12 POS tags (cf. 64 and 41 in Talbanken, respectively). With a smaller label set, there are less possible bigrams that could be anomalous, but more reliable statistics about a Score Thr. b LASb LASa P R None n/a 5867 82.2% 0% 17.8% 100% Freq 0 1561 61.2% 89.9% 38.8% 58.1% WR 0 693 48.1% 86.8% 51.9% 34.5% 6 1074 54.4% 88.5% 45.6% 47.0% Bi 39 227 15.4% 84.9% 84.6% 18.4% 431 776 51.0% 87.0% 49.0% 36.5% POS 81.6 369 33.3% 85.5% 66.7% 23.6% 763 1681 60.1% 91.1% 39.9% 64.3% Table 3: MaltParser results for Bosque Score Thr. b LASb LASa P R None n/a 5852 81.0% 0% 19.0% 100% Freq 0 1835 65.9% 88.0% 34.1% 56.4% WR 0 739 53.9% 85.0% 46.1% 30.7% 3 1109 60.1% 85.9% 39.9% 39.9% Bi 19.5 185 25.4% 82.9% 74.6% 12.4% 215.5 884 56.8% 85.4% 43.2% 34.4% POS 40.8 179 30.2% 82.7% 69.8% 11.3% 381.5 1214 62.5% 85.9% 37.5% 41.0% Table 4: MaltParser results for DDT whole rule. Likewise, with fewer possible POS tag pairs, Alpino has lower precision for the lowthreshold POS scores than the other corpora. For the whole rule scores, the DDT data is worse (compare its 46.1% precision with Bosque’s 45.6%, with vastly different recall values), which could be due to the smaller training data. One might also consider the qualitative differences in the dependency inventory of DDT compared to the others—e.g., appositions, distinctions in names, and more types of modifiers. 
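For concreteness, the whole-rule and bigram scores reported in these tables (Equations 5 and 7) can be sketched as below. In this sketch a training rule is stored as a (comparable category, element tuple) pair, entered once under its LHS dependency label and once under its head POS, so that an element's final score can be taken as the maximum of the two comparisons (Equation 3); the data structures and helper names are illustrative only.

from collections import Counter
from itertools import combinations

def build_subrule_counts(training_rules):
    # C(sub, c): how often each length-(n-1) subsequence occurs inside training
    # rules that are comparable under category c (LHS label or head POS).
    counts = Counter()
    for c, elements in training_rules:
        n = len(elements)
        if n < 2:
            continue
        for idx in combinations(range(n), n - 1):      # drop exactly one element
            counts[(c, tuple(elements[i] for i in idx))] += 1
    return counts

def build_bigram_counts(training_rules):
    counts = Counter()
    for c, elements in training_rules:
        for a, b in zip(elements, elements[1:]):
            counts[(c, a, b)] += 1
    return counts

def whole_rule_scores(elements, c, subrule_counts):
    # Equation (5): each element is credited with the frequency of every
    # comparable subrule (one element shorter) that still contains it.
    n = len(elements)
    scores = [0] * n
    for drop in range(n):
        sub = tuple(e for i, e in enumerate(elements) if i != drop)
        freq = subrule_counts.get((c, sub), 0)
        for i in range(n):
            if i != drop:
                scores[i] += freq
    return scores

def bigram_scores(elements, c, bigram_counts):
    # Equation (7): each element is scored by the counts of its left and right bigrams.
    def count(i, j):
        return bigram_counts.get((c, elements[i], elements[j]), 0)
    n = len(elements)
    return [(count(i - 1, i) if i > 0 else 0) + (count(i, i + 1) if i < n - 1 else 0)
            for i in range(n)]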
5.5 MSTParser Turning to the results of running the methods on the output of MSTParser, we find similar but slightly worse values for the whole rule and bigram methods, as shown in tables 5-8. What is 735 most striking are the differences in the POS-based method for Bosque and DDT (tables 7 and 8), where a large percentage of the test corpus is underneath the threshold. MSTParser is apparently positing fewer distinct head-dependent pairs, as most of them fall under the given thresholds. With the exception of the POS-based method for DDT (where LASb is actually higher than LASa) the different methods seem to be accurate enough to be used as part of corpus post-editing. Score Thr. b LASb LASa P R None n/a 5656 81.1% 0% 18.9% 100% Freq 0 3659 65.2% 89.7% 34.8% 64.9% WR 0 4740 55.7% 86.0% 44.3% 37.9% 6 4217 59.9% 88.3% 40.1% 53.9% Bi 39 5183 38.9% 84.9% 61.1% 27.0% 431 3997 63.2% 88.5% 36.8% 57.1% POS 81.6 327 42.8% 83.4% 57.2% 17.5% 763 1764 68.0% 87.0% 32.0% 52.7% Table 5: MSTParser results for Talbanken Score Thr. b LASb LASa P R None n/a 5585 75.4% 0% 24.6% 100% Freq 0 1371 49.5% 83.9% 50.5% 50.5% WR 0 453 40.0% 78.5% 60.0% 19.8% 6 685 45.4% 79.6% 54.6% 27.2% Bi 39 226 39.8% 76.9% 60.2% 9.9% 431 745 48.2% 79.6% 51.8% 28.1% POS 81.6 570 60.4% 77.1% 39.6% 16.5% 763 1860 61.9% 82.1% 38.1% 51.6% Table 6: MSTParser results for Alpino Score Thr. b LASb LASa P R None n/a 5867 82.5% 0% 17.5% 100% Freq 0 1562 63.9% 89.3% 36.1% 55.0% WR 0 540 50.6% 85.8% 49.4% 26.0% 6 985 58.0% 87.5% 42.0% 40.4% Bi 39 117 34.2% 83.5% 65.8% 7.5% 431 736 56.4% 86.3% 43.6% 31.3% POS 81.6 2978 75.8% 89.4% 24.2% 70.3% 763 3618 74.3% 95.8% 25.7% 90.7% Table 7: MSTParser results for Bosque Score Thr. b LASb LASa P R None n/a 5852 82.9% 0% 17.1% 100% Freq 0 1864 70.3% 88.8% 29.7% 55.3% WR 0 624 60.6% 85.6% 39.4% 24.6% 3 1019 65.4% 86.6% 34.6% 35.3% Bi 19.5 168 28.6% 84.5% 71.4% 12.0% 215.5 839 61.6% 86.5% 38.4% 32.2% POS 40.8 5714 83.0% 79.0% 17.0% 97.1% 381.5 5757 82.9% 80.0% 17.1% 98.1% Table 8: MSTParser results for DDT 6 Summary and Outlook We have proposed different methods for flagging the errors in automatically-parsed corpora, by treating the problem as one of looking for anomalous rules with respect to a treebank grammar. The different methods incorporate differing types and amounts of information, notably comparisons among dependency rules and bigrams within such rules. Using these methods, we demonstrated success in sorting well-formed output from erroneous output across language, corpora, and parsers. Given that the rule representations and comparison methods use both POS and dependency information, a next step in evaluating and improving the methods is to examine automatically POStagged data. Our methods should be able to find POS errors in addition to dependency errors. Furthermore, although we have indicated that differences in accuracy can be linked to differences in the granularity and particular distinctions of the annotation scheme, it is still an open question as to which methods work best for which schemes and for which constructions (e.g., coordination). Acknowledgments Thanks to Sandra K¨ubler and Amber Smith for comments on an earlier draft; Yvonne Samuelsson for help with the Swedish translations; the IU Computational Linguistics discussion group for feedback; and Julia Hockenmaier, Chris Brew, and Rebecca Hwa for discussion on the general topic. A Some Talbanken05 categories POS tags ++ coord. conj. AB adverb AJ adjective AV vara (be) EN indef. 
article HV ha(va) (have) ID part of idiom IG punctuation IR parenthesis NN noun PO pronoun PR preposition RO numeral QV kunna (can) SV skola (will) UK sub. conj. VN verbal noun VV verb XX unclassifiable Dependencies ++ coord. conj. +F main clause coord. AA adverbial AN apposition AT nomainl pre-modifier DT determiner ET nominal post-modifier HD head IG punctuation IR parenthesis JR second parenthesis KA comparative adverbial MA attitude adverbial NA negation adverbial OO object PA preposition comp. PL verb particle SS subject TA time adverbial UK sub. conj. VG verb group XX unclassifiable 736 References Afonso, Susana, Eckhard Bick, Renato Haber and Diana Santos (2002). Floresta Sint´a(c)tica: a treebank for Portuguese. In Proceedings of LREC 2002. Las Palmas, pp. 1698–1703. Attardi, Giuseppe and Massimiliano Ciaramita (2007). Tree Revision Learning for Dependency Parsing. In Proceedings of NAACL-HLT-07. Rochester, NY, pp. 388–395. Bick, Eckhard (2007). Hybrid Ways to Improve Domain Independence in an ML Dependency Parser. In Proceedings of the CoNLL Shared Task Session of EMNLP-CoNLL 2007. Prague, Czech Republic, pp. 1119–1123. Boyd, Adriane, Markus Dickinson and Detmar Meurers (2008). On Detecting Errors in Dependency Treebanks. Research on Language and Computation 6(2), 113–137. Buchholz, Sabine and Erwin Marsi (2006). CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of CoNLL-X. New York City, pp. 149–164. Campbell, David and Stephen Johnson (2002). A transformational-based learner for dependency grammars in discharge summaries. In Proceedings of the ACL-02 Workshop on Natural Language Processing in the Biomedical Domain. Phildadelphia, pp. 37–44. Dickinson, Markus (2008). Ad Hoc Treebank Structures. In Proceedings of ACL-08. Columbus, OH. Dickinson, Markus and Jennifer Foster (2009). Similarity Rules! Exploring Methods for AdHoc Rule Detection. In Proceedings of TLT-7. Groningen, The Netherlands. Gildea, Daniel (2001). Corpus Variation and Parser Performance. In Proceedings of EMNLP-01. Pittsburgh, PA. Hall, Keith and V´aclav Nov´ak (2005). Corrective Modeling for Non-Projective Dependency Parsing. In Proceedings of IWPT-05. Vancouver, pp. 42–52. Kromann, Matthias Trautner (2003). The Danish Dependency Treebank and the underlying linguistic theory. In Proceedings of TLT-03. Kuhlmann, Marco and Giorgio Satta (2009). Treebank Grammar Techniques for Non-Projective Dependency Parsing. In Proceedings of EACL09. Athens, Greece, pp. 478–486. Loftsson, Hrafn (2009). Correcting a POS-Tagged Corpus Using Three Complementary Methods. In Proceedings of EACL-09. Athens, Greece, pp. 523–531. McDonald, Ryan and Fernando Pereira (2006). Online learning of approximate dependency parsing algorithms. In Proceedings of EACL06. Trento. Nilsson, Jens and Johan Hall (2005). Reconstruction of the Swedish Treebank Talbanken. MSI report 05067, V¨axj¨o University: School of Mathematics and Systems Engineering. Nivre, Joakim, Johan Hall, Jens Nilsson, Atanas Chanev, Gulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov and Erwin Marsi (2007). MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering 13(2), 95–135. Owczarzak, Karolina (2009). DEPEVAL(summ): Dependency-based Evaluation for Automatic Summaries. In Proceedings of ACL-AFNLP-09. Suntec, Singapore, pp. 190–198. Przepi´orkowski, Adam (2006). What to acquire from corpora in automatic valence acquisition. 
In Violetta Koseska-Toszewa and Roman Roszko (eds.), Semantyka a konfrontacja jezykowa, tom 3, Warsaw: Slawistyczny O´srodek Wydawniczy PAN, pp. 25–41. Sekine, Satoshi (1997). The Domain Dependence of Parsing. In Proceedings of ANLP-96. Washington, DC. van der Beek, Leonoor, Gosse Bouma, Robert Malouf and Gertjan van Noord (2002). The Alpino Dependency Treebank. In Proceedings of CLIN 2001. Rodopi. van Noord, Gertjan and Gosse Bouma (2009). Parsed Corpora for Linguistics. In Proceedings of the EACL 2009 Workshop on the Interaction between Linguistics and Computational Linguistics: Virtuous, Vicious or Vacuous?. Athens, pp. 33–39. Wallis, Sean (2003). Completing Parsed Corpora. In Anne Abeill´e (ed.), Treebanks: Building and using syntactically annoted corpora, Dordrecht: Kluwer Academic Publishers, pp. 61–71. Wan, Stephen, Mark Dras, Robert Dale and C´ecile Paris (2009). Improving Grammaticality in Sta737 tistical Sentence Generation: Introducing a Dependency Spanning Tree Algorithm with an Argument Satisfaction Model. In Proceedings of EACL-09. Athens, Greece, pp. 852–860. Xu, Peng, Jaeho Kang, Michael Ringgaard and Franz Och (2009). Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages. In Proceedings of NAACL-HLT-09. Boulder, Colorado, pp. 245–253. 738
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 739–748, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Boosting-based System Combination for Machine Translation Tong Xiao, Jingbo Zhu, Muhua Zhu, Huizhen Wang Natural Language Processing Lab. Northeastern University, China {xiaotong,zhujingbo,wanghuizhen}@mail.neu.edu.cn [email protected] Abstract In this paper, we present a simple and effective method to address the issue of how to generate diversified translation systems from a single Statistical Machine Translation (SMT) engine for system combination. Our method is based on the framework of boosting. First, a sequence of weak translation systems is generated from a baseline system in an iterative manner. Then, a strong translation system is built from the ensemble of these weak translation systems. To adapt boosting to SMT system combination, several key components of the original boosting algorithms are redesigned in this work. We evaluate our method on Chinese-to-English Machine Translation (MT) tasks in three baseline systems, including a phrase-based system, a hierarchical phrasebased system and a syntax-based system. The experimental results on three NIST evaluation test sets show that our method leads to significant improvements in translation accuracy over the baseline systems. 1 Introduction Recent research on Statistical Machine Translation (SMT) has achieved substantial progress. Many SMT frameworks have been developed, including phrase-based SMT (Koehn et al., 2003), hierarchical phrase-based SMT (Chiang, 2005), syntax-based SMT (Eisner, 2003; Ding and Palmer, 2005; Liu et al., 2006; Galley et al., 2006; Cowan et al., 2006), etc. With the emergence of various structurally different SMT systems, more and more studies are focused on combining multiple SMT systems for achieving higher translation accuracy rather than using a single translation system. The basic idea of system combination is to extract or generate a translation by voting from an ensemble of translation outputs. Depending on how the translation is combined and what voting strategy is adopted, several methods can be used for system combination, e.g. sentence-level combination (Hildebrand and Vogel, 2008) simply selects one from original translations, while some more sophisticated methods, such as wordlevel and phrase-level combination (Matusov et al., 2006; Rosti et al., 2007), can generate new translations differing from any of the original translations. One of the key factors in SMT system combination is the diversity in the ensemble of translation outputs (Macherey and Och, 2007). To obtain diversified translation outputs, most of the current system combination methods require multiple translation engines based on different models. However, this requirement cannot be met in many cases, since we do not always have the access to multiple SMT engines due to the high cost of developing and tuning SMT systems. To reduce the burden of system development, it might be a nice way to combine a set of translation systems built from a single translation engine. A key issue here is how to generate an ensemble of diversified translation systems from a single translation engine in a principled way. Addressing this issue, we propose a boostingbased system combination method to learn a combined translation system from a single SMT engine. 
In this method, a sequence of weak translation systems is generated from a baseline system in an iterative manner. In each iteration, a new weak translation system is learned, focusing more on the sentences that are relatively poorly translated by the previous weak translation system. Finally, a strong translation system is built from the ensemble of the weak translation systems. Our experiments are conducted on Chinese-toEnglish translation in three state-of-the-art SMT systems, including a phrase-based system, a hierarchical phrase-based system and a syntax-based 739 Input: a model u, a sequence of (training) samples {(f1, r1), ..., (fm, rm)} where fi is the i-th source sentence, and ri is the set of reference translations for fi. Output: a new translation system Initialize: D1(i) = 1 / m for all i = 1, ..., m For t = 1, ..., T 1. Train a translation system u(λ* t) on {(fi, ri)} using distribution Dt 2. Calculate the error rate tε of u(λ* t) on {(fi, ri)} 3. Set 1 1 ln( ) 2 t t t ε α ε + = (3) 4. Update weights 1 ( ) ( ) t il t t t D i e D i Z α ⋅ + = (4) where li is the loss on the i-th training sample, and Zt is the normalization factor. Output the final system: v(u(λ* 1), ..., u (λ* T)) Figure 1: Boosting-based System Combination system. All the systems are evaluated on three NIST MT evaluation test sets. Experimental results show that our method leads to significant improvements in translation accuracy over the baseline systems. 2 Background Given a source string f, the goal of SMT is to find a target string e* by the following equation. * argmax(Pr( | )) e e e f = (1) where Pr( | ) e f is the probability that e is the translation of the given source string f. To model the posterior probability Pr( | ) e f , most of the state-of-the-art SMT systems utilize the loglinear model proposed by Och and Ney (2002), as follows, 1 ' 1 exp( ( , )) Pr( | ) exp( ( , ')) M m m m M m m e m h f e e f h f e λ λ = = ⋅ = ⋅ ∑ ∑ ∑ (2) where {hm( f, e ) | m = 1, ..., M} is a set of features, and λm is the feature weight corresponding to the m-th feature. hm( f, e ) can be regarded as a function that maps every pair of source string f and target string e into a non-negative value, and λm can be viewed as the contribution of hm( f, e ) to the overall score Pr( | ) e f . In this paper, u denotes a log-linear model that has M fixed features {h1( f ,e ), ..., hM( f ,e )}, λ = {λ1, ..., λM} denotes the M parameters of u, and u(λ) denotes a SMT system based on u with parameters λ. Generally, λ is trained on a training data set1 to obtain an optimized weight vector λ* and consequently an optimized system u(λ*). 3 Boosting-based System Combination for Single Translation Engine Suppose that there are T available SMT systems {u1(λ* 1), ..., uT(λ* T)}, the task of system combination is to build a new translation system v(u1(λ* 1), ..., uT(λ* T)) from {u1(λ* 1), ..., uT(λ* T)}. Here v(u1(λ* 1), ..., uT(λ* T)) denotes the combination system which combines translations from the ensemble of the output of each ui(λ* i). We call ui(λ* i) a member system of v(u1(λ* 1), ..., uT(λ* T)). As discussed in Section 1, the diversity among the outputs of member systems is an important factor to the success of system combination. To obtain diversified member systems, traditional methods concentrate more on using structurally different member systems, that is u1≠ u2 ≠...≠ uT. However, this constraint condition cannot be satisfied when multiple translation engines are not available. 
In this paper, we argue that the diversified member systems can also be generated from a single engine u(λ*) by adjusting the weight vector λ* in a principled way. In this work, we assume that u1 = u2 =...= uT = u. Our goal is to find a series of λ* i and build a combined system from {u(λ* i)}. To achieve this goal, we propose a 1 The data set used for weight training is generally called development set or tuning set in the SMT field. In this paper, we use the term training set to emphasize the training of log-linear model. 740 boosting-based system combination method (Figure 1). Like other boosting algorithms, such as AdaBoost (Freund and Schapire, 1997; Schapire, 2001), the basic idea of this method is to use weak systems (member systems) to form a strong system (combined system) by repeatedly calling weak system trainer on different distributions over the training samples. However, since most of the boosting algorithms are designed for the classification problem that is very different from the translation problem in natural language processing, several key components have to be redesigned when boosting is adapted to SMT system combination. 3.1 Training In this work, Minimum Error Rate Training (MERT) proposed by Och (2003) is used to estimate feature weights λ over a series of training samples. As in other state-of-the-art SMT systems, BLEU is selected as the accuracy measure to define the error function used in MERT. Since the weights of training samples are not taken into account in BLEU2, we modify the original definition of BLEU to make it sensitive to the distribution Dt(i) over the training samples. The modified version of BLEU is called weighted BLEU (WBLEU) in this paper. Let E = e1 ... em be the translations produced by the system, R = r1 ... rm be the reference translations where ri = {ri1, ..., riN}, and Dt(i) be the weight of the i-th training sample (fi, ri). The weighted BLEU metric has the following form: { } ( ) 1 1 1 1 1 1/ 4 m 4 1 1 m 1 1 WBLEU( , ) ( ) min | ( ) | exp 1 max 1, ( ) | ( ) | ( ) ( ) ( ) (5) ( ) ( ) m ij t i j N m i t i N i ij t n n i j i n t n i E R D i g r D i g e D i g e g r D i g e = ≤≤ = = = = = ⎛ ⎞ ⎧ ⎫ ⎪ ⎪ ⎜ ⎟ = − × ⎨ ⎬ ⎜ ⎟ ⎜ ⎟ ⎪ ⎪ ⎩ ⎭ ⎝ ⎠ ⎛ ⎞ ⎜ ⎟ ⎜ ⎟ ⎜ ⎟ ⎝ ⎠ ∑ ∑ ∑ ∏ ∑ I U where ( ) n g s is the multi-set of all n-grams in a string s. In this definition, n-grams in ei and {rij} are weighted by Dt(i). If the i-th training sample has a larger weight, the corresponding n-grams will have more contributions to the overall score WBLEU( , ) E R . As a result, the i-th training sample gains more importance in MERT. Obvi 2 In this paper, we use the NIST definition of BLEU where the effective reference length is the length of the shortest reference translation. ously the original BLEU is just a special case of WBLEU when all the training samples are equally weighted. As the weighted BLEU is used to measure the translation accuracy on the training set, the error rate is defined to be: 1 WBLEU( , ) t E R ε = − (6) 3.2 Re-weighting Another key point is the maintaining of the distribution Dt(i) over the training set. Initially all the weights of training samples are set equally. On each round, we increase the weights of the samples that are relatively poorly translated by the current weak system so that the MERT-based trainer can focus on the hard samples in next round. The update rule is given in Equation 4 with two parameters t α and li in it. t α can be regarded as a measure of the importance that the t-th weak system gains in boosting. 
The definition of t α guarantees that t α always has a positive value3. A main effect of t α is to scale the weight updating (e.g. a larger t α means a greater update). li is the loss on the i-th sample. For each i, let {ei1, ..., ein} be the n-best translation candidates produced by the system. The loss function is defined to be: * 1 1 BLEU( , ) BLEU( , ) k i i i ij i j l e e k = = −∑ r r (7) where BLEU(eij, ri) is the smoothed sentence-level BLEU score (Liang et al., 2006) of the translation e with respect to the reference translations ri, and ei * is the oracle translation which is selected from {ei1, ..., ein} in terms of BLEU(eij, ri). li can be viewed as a measure of the average cost that we guess the top-k translation candidates instead of the oracle translation. The value of li counts for the magnitude of weight update, that is, a larger li means a larger weight update on Dt(i). The definition of the loss function here is similar to the one used in (Chiang et al., 2008) where only the top-1 translation candidate (i.e. k = 1) is taken into account. 3.3 System Combination Scheme In the last step of our method, a strong translation system v(u(λ* 1), ..., u(λ* T)) is built from the 3 Note that the definition of t α here is different from that in the original AdaBoost algorithm (Freund and Schapire, 1997; Schapire, 2001) where t α is a negative number when 0.5 tε > . 741 ensemble of member systems {u(λ* 1), ..., u(λ* T)}. In this work, a sentence-level combination method is used to select the best translation from the pool of the n-best outputs of all the member systems. Let H(u(λ* t)) (or Ht for short) be the set of the n-best translation candidates produced by the t-th member system u(λ* t), and H(v) be the union set of all Ht (i.e. ( ) t H v H = U ). The final translation is generated from H(v) based on the following scoring function: * 1 ( ) argmax ( ) ( , ( )) T t t t e H v e e e H v β φ ψ = ∈ = ⋅ + ∑ (8) where ( ) t e φ is the log-scaled model score of e in the t-th member system, and t β is the corresponding feature weight. It should be noted that i e H ∈ may not exist in any 'i i H ≠. In this case, we can still calculate the model score of e in any other member systems, since all the member systems are based on the same model and share the same feature space. ( , ( )) e H v ψ is a consensusbased scoring function which has been successfully adopted in SMT system combination (Duan et al., 2009; Hildebrand and Vogel, 2008; Li et al., 2009). The computation of ( , ( )) e H v ψ is based on a linear combination of a set of n-gram consensuses-based features. ( , ( )) ( , ( )) n n n e H v h e H v ψ θ + + = ⋅ + ∑ ( , ( )) n n n h e H v θ − − ⋅ ∑ (9) For each order of n-gram, ( , ( )) nh e H v + and ( , ( )) nh e H v − are defined to measure the n-gram agreement and disagreement between e and other translation candidates in H(v), respectively. n θ + and n θ −are the feature weights corresponding to ( , ( )) nh e H v + and ( , ( )) nh e H v − . As ( , ( )) nh e H v + and ( , ( )) nh e H v − used in our work are exactly the same as the features used in (Duan et al., 2009) and similar to the features used in (Hildebrand and Vogel, 2008; Li et al., 2009), we do not present the detailed description of them in this paper. If p orders of n-gram are used in computing ( , ( )) e H v ψ , the total number of features in the system combination will be 2 T p + × (T modelscore-based features defined in Equation 8 and 2 p × consensus-based features defined in Equation 9). 
Since all these features are combined linearly, we use MERT to optimize them for the combination model. 4 Optimization If implemented naively, the translation speed of the final translation system will be very slow. For a given input sentence, each member system has to encode it individually, and the translation speed is inversely proportional to the number of member systems generated by our method. Fortunately, with the thought of computation, there are a number of optimizations that can make the system much more efficient in practice. A simple solution is to run member systems in parallel when translating a new sentence. Since all the member systems share the same data resources, such as language model and translation table, we only need to keep one copy of the required resources in memory. The translation speed just depends on the computing power of parallel computation environment, such as the number of CPUs. Furthermore, we can use joint decoding techniques to save the computation of the equivalent translation hypotheses among member systems. In joint decoding of member systems, the search space is structured as a translation hypergraph where the member systems can share their translation hypotheses. If more than one member systems share the same translation hypothesis, we just need to compute the corresponding feature values only once, instead of repeating the computation in individual decoders. In our experiments, we find that over 60% translation hypotheses can be shared among member systems when the number of member systems is over 4. This result indicates that promising speed improvement can be achieved by using the joint decoding and hypothesis sharing techniques. Another method to speed up the system is to accelerate n-gram language model with n-gram caching techniques. In this method, a n-gram cache is used to store the most frequently and recently accessed n-grams. When a new n-gram is accessed during decoding, the cache is checked first. If the required n-gram hits the cache, the corresponding n-gram probability is returned by the cached copy rather than refetching the original data in language model. As the translation speed of SMT system depends heavily on the computation of n-gram language model, the acceleration of n-gram language model generally leads to substantial speed-up of SMT system. In our implementation, the n-gram caching in general brings us over 30% speed improvement of the system. 742 5 Experiments Our experiments are conducted on Chinese-toEnglish translation in three SMT systems. 5.1 Baseline Systems The first SMT system is a phrase-based system with two reordering models including the maximum entropy-based lexicalized reordering model proposed by Xiong et al. (2006) and the hierarchical phrase reordering model proposed by Galley and Manning (2008). In this system all phrase pairs are limited to have source length of at most 3, and the reordering limit is set to 8 by default4. The second SMT system is an in-house reimplementation of the Hiero system which is based on the hierarchical phrase-based model proposed by Chiang (2005). The third SMT system is a syntax-based system based on the string-to-tree model (Galley et al., 2006; Marcu et al., 2006), where both the minimal GHKM and SPMT rules are extracted from the bilingual text, and the composed rules are generated by combining two or three minimal GHKM and SPMT rules. Synchronous binarization (Zhang et al., 2006; Xiao et al., 2009) is performed on each translation rule for the CKYstyle decoding. 
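Returning to the n-gram caching optimization discussed in Section 4, the following sketch wraps a language-model lookup with a small LRU cache; the backing model interface and the cache size are illustrative assumptions rather than details of the actual decoder.

```python
from collections import OrderedDict

class CachedLM:
    """Wrap an n-gram language model with a bounded LRU cache, so that
    frequently and recently accessed n-grams are served from memory
    instead of re-fetching the underlying model."""

    def __init__(self, score_fn, capacity=100000):
        self.score_fn = score_fn          # callable: n-gram tuple -> log-probability
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def score(self, ngram):
        if ngram in self.cache:
            self.hits += 1
            self.cache.move_to_end(ngram)      # mark as most recently used
            return self.cache[ngram]
        self.misses += 1
        p = self.score_fn(ngram)
        self.cache[ngram] = p
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used entry
        return p

# Toy backing model: a dictionary of log-probabilities with a default value.
table = {("the", "cat"): -1.2, ("cat", "sat"): -0.9}
lm = CachedLM(lambda ng: table.get(ng, -5.0), capacity=1000)
for ng in [("the", "cat"), ("the", "cat"), ("cat", "sat")]:
    lm.score(ng)
print(lm.hits, lm.misses)   # 1 hit, 2 misses
```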
In this work, baseline system refers to the system produced by the boosting-based system combination when the number of iterations (i.e. T ) is set to 1. To obtain satisfactory baseline performance, we train each SMT system for 5 times using MERT with different initial values of feature weights to generate a group of baseline candidates, and then select the best-performing one from this group as the final baseline system (i.e. the starting point in the boosting process) for the following experiments. 5.2 Experimental Setup Our bilingual data consists of 140K sentence pairs in the FBIS data set5. GIZA++ is employed to perform the bi-directional word alignment between the source and target sentences, and the final word alignment is generated using the intersect-diag-grow method. All the word-aligned bilingual sentence pairs are used to extract phrases and rules for the baseline systems. A 5gram language model is trained on the target-side 4 Our in-house experimental results show that this system performs slightly better than Moses on Chinese-to-English translation tasks. 5 LDC catalog number: LDC2003E14 of the bilingual data and the Xinhua portion of English Gigaword corpus. Berkeley Parser is used to generate the English parse trees for the rule extraction of the syntax-based system. The data set used for weight training in boostingbased system combination comes from NIST MT03 evaluation set. To speed up MERT, all the sentences with more than 20 Chinese words are removed. The test sets are the NIST evaluation sets of MT04, MT05 and MT06. The translation quality is evaluated in terms of case-insensitive NIST version BLEU metric. Statistical significant test is conducted using the bootstrap resampling method proposed by Koehn (2004). Beam search and cube pruning (Huang and Chiang, 2007) are used to prune the search space in all the three baseline systems. By default, both of the beam size and the size of n-best list are set to 20. In the settings of boosting-based system combination, the maximum number of iterations is set to 30, and k (in Equation 7) is set to 5. The ngram consensuses-based features (in Equation 9) used in system combination ranges from unigram to 4-gram. 5.3 Evaluation of Translations First we investigate the effectiveness of the boosting-based system combination on the three systems. Figures 2-5 show the BLEU curves on the development and test sets, where the X-axis is the iteration number, and the Y-axis is the BLEU score of the system generated by the boostingbased system combination. The points at iteration 1 stand for the performance of the baseline systems. We see, first of all, that all the three systems are improved during iterations on the development set. This trend also holds on the test sets. After 5, 7 and 8 iterations, relatively stable improvements are achieved by the phrase-based system, the Hiero system and the syntax-based system, respectively. The BLEU scores tend to converge to the stable values after 20 iterations for all the systems. Figures 2-5 also show that the boosting-based system combination seems to be more helpful to the phrase-based system than to the Hiero system and the syntax-based system. For the phrase-based system, it yields over 0.6 BLEU point gains just after the 3rd iteration on all the data sets. Table 1 summarizes the evaluation results, where the BLEU scores at iteration 5, 10, 15, 20 and 30 are reported for the comparison. 
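The statistical significance testing mentioned in the experimental setup follows Koehn (2004); a minimal paired bootstrap resampling sketch is given below, with a toy overlap metric standing in for corpus-level BLEU (the metric, data and number of resamples are illustrative).

```python
import random

def paired_bootstrap(metric, sys_a, sys_b, refs, n_samples=1000, seed=7):
    """Paired bootstrap resampling in the spirit of Koehn (2004).

    metric : function(list_of_hyps, list_of_ref_sets) -> corpus-level score
    sys_a, sys_b : translations of the same test set by two systems
    refs : reference translations, one list of references per sentence
    Returns the fraction of resampled test sets on which system A wins,
    read as the confidence that A outperforms B.
    """
    rng = random.Random(seed)
    m = len(sys_a)
    wins = 0
    for _ in range(n_samples):
        idx = [rng.randrange(m) for _ in range(m)]   # sample sentences with replacement
        a = metric([sys_a[i] for i in idx], [refs[i] for i in idx])
        b = metric([sys_b[i] for i in idx], [refs[i] for i in idx])
        if a > b:
            wins += 1
    return wins / n_samples

# Toy metric: average token overlap with the first reference (BLEU stand-in).
def overlap(hyps, ref_sets):
    return sum(len(set(h) & set(r[0])) / max(len(set(r[0])), 1)
               for h, r in zip(hyps, ref_sets)) / len(hyps)

sys_a = [["the", "cat", "sat"], ["a", "dog", "barks"]]
sys_b = [["the", "cat"], ["dog"]]
refs = [[["the", "cat", "sat"]], [["the", "dog", "barks"]]]
print(paired_bootstrap(overlap, sys_a, sys_b, refs, n_samples=200))
```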
We see that the boosting-based system method stably ac- 743 33 34 35 36 37 38 0 5 10 15 20 25 30 BLEU4[%] iteration number BLEU on MT03 (dev.) phrase-based hiero syntax-based Figure 2: BLEU scores on the development set 33 34 35 36 37 38 0 5 10 15 20 25 30 BLEU4[%] iteration number BLEU on MT04 (test) phrase-based hiero syntax-based Figure 3: BLEU scores on the test set of MT04 32 33 34 35 36 37 0 5 10 15 20 25 30 BLEU4[%] iteration number BLEU on MT05 (test) phrase-based hiero syntax-based Figure 4: BLEU scores on the test set of MT05 30 31 32 33 34 35 0 5 10 15 20 25 30 BLEU4[%] iteration number BLEU on MT06 (test) phrase-based hiero syntax-based Figure 5: BLEU scores on the test set of MT06 Phrase-based Hiero Syntax-based Dev. MT04 MT05 MT06 Dev. MT04 MT05 MT06 Dev. MT04 MT05 MT06 Baseline 33.21 33.68 32.68 30.59 33.42 34.30 33.24 30.62 35.84 35.71 35.11 32.43 Baseline+600best 33.32 33.93 32.84 30.76 33.48 34.46 33.39 30.75 35.95 35.88 35.23 32.58 Boosting-5Iterations 33.95* 34.32* 33.33* 31.33* 33.73 34.48 33.44 30.83 36.03 35.92 35.27 33.09 Boosting-10Iterations 34.14* 34.68* 33.42* 31.35* 33.75 34.65 33.75* 31.02 36.14 36.39* 35.47 33.15* Boosting-15Iterations 33.99* 34.78* 33.46* 31.45* 34.03* 34.88* 33.98* 31.20* 36.36* 36.46* 35.53* 33.43* Boosting-20Iterations 34.09* 35.11* 33.56* 31.45* 34.17* 35.00* 34.04* 31.29* 36.44* 36.79* 35.77* 33.36* Boosting-30Iterations 34.12* 35.16* 33.76* 31.59* 34.05* 34.99* 34.05* 31.30* 36.52* 36.81* 35.71* 33.46* Table 1: Summary of the results (BLEU4[%]) on the development and test sets. * = significantly better than baseline (p < 0.05). hieves significant BLEU improvements after 15 iterations, and the highest BLEU scores are generally yielded after 20 iterations. Also as shown in Table 1, over 0.7 BLEU point gains are obtained on the phrase-based system after 10 iterations. The largest BLEU improvement on the phrase-based system is over 1 BLEU point in most cases. These results reflect that our method is relatively more effective for the phrase-based system than for the other two systems, and thus confirms the fact we observed in Figures 2-5. We also investigate the impact of n-best list size on the performance of baseline systems. For the comparison, we show the performance of the baseline systems with the n-best list size of 600 (Baseline+600best in Table 1) which equals to the maximum number of translation candidates accessed in the final combination system (combi- ne 30 member systems, i.e. Boosing-30Iterations). 744 15 20 25 30 35 40 0 5 10 15 20 25 30 Diversity (TER[%]) iteration number Diversity on MT03 (dev.) phrase-based hiero syntax-based Figure 6: Diversity on the development set 10 15 20 25 30 35 0 5 10 15 20 25 30 Diversity (TER[%]) iteration number Diversity on MT04 (test) phrase-based hiero syntax-based Figure 7: Diversity on the test set of MT04 15 20 25 30 35 0 5 10 15 20 25 30 Diversity (TER[%]) iteration number Diversity on MT05 (test) phrase-based hiero syntax-based Figure 8: Diversity on the test set of MT05 15 20 25 30 35 40 0 5 10 15 20 25 30 Diversity (TER[%]) iteration number Diversity on MT06 (test) phrase-based hiero syntax-based Figure 9: Diversity on the test set of MT06 As shown in Table 1, Baseline+600best obtains stable improvements over Baseline. It indicates that the access to larger n-best lists is helpful to improve the performance of baseline systems. However, the improvements achieved by Baseline+600best are modest compared to the improvements achieved by Boosting-30Iterations. 
These results indicate that the SMT systems can benefit more from the diversified outputs of member systems rather than from larger n-best lists produced by a single system. 5.4 Diversity among Member Systems We also study the change of diversity among the outputs of member systems during iterations. The diversity is measured in terms of the Translation Error Rate (TER) metric proposed in (Snover et al., 2006). A higher TER score means that more edit operations are performed if we transform one translation output into another translation output, and thus reflects a larger diversity between the two outputs. In this work, the TER score for a given group of member systems is calculated by averaging the TER scores between the outputs of each pair of member systems in this group. Figures 6-9 show the curves of diversity on the development and test sets, where the X-axis is the iteration number, and the Y-axis is the diversity. The points at iteration 1 stand for the diversities of baseline systems. In this work, the baseline’s diversity is the TER score of the group of baseline candidates that are generated in advance (Section 5.1). We see that the diversities of all the systems increase during iterations in most cases, though a few drops occur at a few points. It indicates that our method is very effective to generate diversified member systems. In addition, the diversities of baseline systems (iteration 1) are much lower 745 than those of the systems generated by boosting (iterations 2-30). Together with the results shown in Figures 2-5, it confirms our motivation that the diversified translation outputs can lead to performance improvements over the baseline systems. Also as shown in Figures 6-9, the diversity of the Hiero system is much lower than that of the phrase-based and syntax-based systems at each individual setting of iteration number. This interesting finding supports the observation that the performance of the Hiero system is relatively more stable than the other two systems as shown in Figures 2-5. The relative lack of diversity in the Hiero system might be due to the spurious ambiguity in Hiero derivations which generally results in very few different translations in translation outputs (Chiang, 2007). 5.5 Evaluation of Oracle Translations In this set of experiments, we evaluate the oracle performance on the n-best lists of the baseline systems and the combined systems generated by boosting-based system combination. Our primary goal here is to study the impact of our method on the upper-bound performance. Table 2 shows the results, where Baseline+600best stands for the top-600 translation candidates generated by the baseline systems, and Boosting-30iterations stands for the ensemble of 30 member systems’ top-20 translation candidates. As expected, the oracle performance of Boosting-30Iterations is significantly higher than that of Baseline+600best. This result indicates that our method can provide much “better” translation candidates for system combination than enlarging the size of n-best list naively. It also gives us a rational explanation for the significant improvements achieved by our method as shown in Section 5.3. Data Set Method Phrasebased Hiero Syntaxbased Baseline+600best 46.36 46.51 46.92 Dev. 
Boosting-30Iterations 47.78* 47.44* 48.70* Baseline+600best 43.94 44.52 46.88 MT04 Boosting-30Iterations 45.97* 45.47* 49.40* Baseline+600best 42.32 42.47 45.21 MT05 Boosting-30Iterations 44.82* 43.44* 47.02* Baseline+600best 39.47 39.39 40.52 MT06 Boosting-30Iterations 41.51* 40.10* 41.88* Table 2: Oracle performance of various systems. * = significantly better than baseline (p < 0.05). 6 Related Work Boosting is a machine learning (ML) method that has been well studied in the ML community (Freund, 1995; Freund and Schapire, 1997; Collins et al., 2002; Rudin et al., 2007), and has been successfully adopted in natural language processing (NLP) applications, such as document classification (Schapire and Singer, 2000) and named entity classification (Collins and Singer, 1999). However, most of the previous work did not study the issue of how to improve a single SMT engine using boosting algorithms. To our knowledge, the only work addressing this issue is (Lagarda and Casacuberta, 2008) in which the boosting algorithm was adopted in phrase-based SMT. However, Lagarda and Casacuberta (2008)’s method calculated errors over the phrases that were chosen by phrase-based systems, and could not be applied to many other SMT systems, such as hierarchical phrase-based systems and syntax-based systems. Differing from Lagarda and Casacuberta’s work, we are concerned more with proposing a general framework which can work with most of the current SMT models and empirically demonstrating its effectiveness on various SMT systems. There are also some other studies on building diverse translation systems from a single translation engine for system combination. The first attempt is (Macherey and Och, 2007). They empirically showed that diverse translation systems could be generated by changing parameters at early-stages of the training procedure. Following Macherey and Och (2007)’s work, Duan et al. (2009) proposed a feature subspace method to build a group of translation systems from various different sub-models of an existing SMT system. However, Duan et al. (2009)’s method relied on the heuristics used in feature sub-space selection. For example, they used the remove-one-feature strategy and varied the order of n-gram language model to obtain a satisfactory group of diverse systems. Compared to Duan et al. (2009)’s method, a main advantage of our method is that it can be applied to most of the SMT systems without designing any heuristics to adapt it to the specified systems. 7 Discussion and Future Work Actually the method presented in this paper is doing something rather similar to Minimum Bayes Risk (MBR) methods. A main difference lies in that the consensus-based combination method here does not model the posterior probability of each hypothesis (i.e. all the hypotheses are assigned an equal posterior probability when we calculate the consensus-based features). 746 Greater improvements are expected if MBR methods are used and consensus-based combination techniques smooth over noise in the MERT pipeline. In this work, we use a sentence-level system combination method to generate final translations. It is worth studying other more sophisticated alternatives, such as word-level and phrase-level system combination, to further improve the system performance. Another issue is how to determine an appropriate number of iterations for boosting-based system combination. It is especially important when our method is applied in the real-world applications. 
Our empirical study shows that the stable and satisfactory improvements can be achieved after 6-8 iterations, while the largest improvements can be achieved after 20 iterations. In our future work, we will study in-depth principled ways to determine the appropriate number of iterations for boosting-based system combination. 8 Conclusions We have proposed a boosting-based system combination method to address the issue of building a strong translation system from a group of weak translation systems generated from a single SMT engine. We apply our method to three state-ofthe-art SMT systems, and conduct experiments on three NIST Chinese-to-English MT evaluations test sets. The experimental results show that our method is very effective to improve the translation accuracy of the SMT systems. Acknowledgements This work was supported in part by the National Science Foundation of China (60873091) and the Fundamental Research Funds for the Central Universities (N090604008). The authors would like to thank the anonymous reviewers for their pertinent comments, Tongran Liu, Chunliang Zhang and Shujie Yao for their valuable suggestions for improving this paper, and Tianning Li and Rushan Chen for developing parts of the baseline systems. References David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proc. of ACL 2005, Ann Arbor, Michigan, pages 263270. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228. David Chiang, Yuval Marton and Philip Resnik. 2008. Online Large-Margin Training of Syntactic and Structural Translation Features. In Proc. of EMNLP 2008, Honolulu, pages 224-233. Michael Collins and Yoram Singer. 1999. Unsupervised Models for Named Entity Classification. In Proc. of EMNLP/VLC 1999, pages 100-110. Michael Collins, Robert Schapire and Yoram Singer. 2002. Logistic Regression, AdaBoost and Bregman Distances. Machine Learning, 48(3): 253-285. Brooke Cowan, Ivona Kučerová and Michael Collins. 2006. A discriminative model for tree-to-tree translation. In Proc. of EMNLP 2006, pages 232-241. Yuan Ding and Martha Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammars. In Proc. of ACL 2005, Ann Arbor, Michigan, pages 541-548. Nan Duan, Mu Li, Tong Xiao and Ming Zhou. 2009. The Feature Subspace Method for SMT System Combination. In Proc. of EMNLP 2009, pages 1096-1104. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In Proc. of ACL 2003, pages 205-208. Yoav Freund. 1995. Boosting a weak learning algorithm by majority. Information and Computation, 121(2): 256-285. Yoav Freund and Robert Schapire. 1997. A decisiontheoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang and Ignacio Thayer. 2006. Scalable inferences and training of context-rich syntax translation models. In Proc. of ACL 2006, Sydney, Australia, pages 961-968. Michel Galley and Christopher D. Manning. 2008. A Simple and Effective Hierarchical Phrase Reordering Model. In Proc. of EMNLP 2008, Hawaii, pages 848-856. Almut Silja Hildebrand and Stephan Vogel. 2008. Combination of machine translation systems via hypothesis selection from combined n-best lists. In Proc. of the 8th AMTA conference, pages 254-261. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. 
In Proc. of ACL 2007, Prague, Czech Republic, pages 144-151. 747 Philipp Koehn, Franz Och and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proc. of HLT-NAACL 2003, Edmonton, USA, pages 48-54. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proc. of EMNLP 2004, Barcelona, Spain, pages 388-395. Antonio Lagarda and Francisco Casacuberta. 2008. Applying Boosting to Statistical Machine Translation. In Proc. of the 12th EAMT conference, pages 88-96. Mu Li, Nan Duan, Dongdong Zhang, Chi-Ho Li and Ming Zhou. 2009. Collaborative Decoding: Partial Hypothesis Re-Ranking Using Translation Consensus between Decoders. In Proc. of ACL-IJCNLP 2009, Singapore, pages 585-592. Percy Liang, Alexandre Bouchard-Côté, Dan Klein and Ben Taskar. 2006. An end-to-end discriminative approach to machine translation. In Proc. of COLING/ACL 2006, pages 104-111. Yang Liu, Qun Liu and Shouxun Lin. 2006. Tree-toString Alignment Template for Statistical Machine Translation. In Proc. of ACL 2006, pages 609-616. Wolfgang Macherey and Franz Och. 2007. An Empirical Study on Computing Consensus Translations from Multiple Machine Translation Systems. In Proc. of EMNLP 2007, pages 986-995. Daniel Marcu, Wei Wang, Abdessamad Echihabi and Kevin Knight. 2006. SPMT: Statistical machine translation with syntactified target language phrases. In Proc. of EMNLP 2006, Sydney, Australia, pages 44-52. Evgeny Matusov, Nicola Ueffing and Hermann Ney. 2006. Computing consensus translation from multiple machine translation systems using enhanced hypotheses alignment. In Proc. of EACL 2006, pages 33-40. Franz Och and Hermann Ney. 2002. Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. In Proc. of ACL 2002, Philadelphia, pages 295-302. Franz Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proc. of ACL 2003, Japan, pages 160-167. Antti-Veikko Rosti, Spyros Matsoukas and Richard Schwartz. 2007. Improved Word-Level System Combination for Machine Translation. In Proc. of ACL 2007, pages 312-319. Cynthia Rudin, Robert Schapire and Ingrid Daubechies. 2007. Analysis of boosting algorithms using the smooth margin function. The Annals of Statistics, 35(6): 2723-2768. Robert Schapire and Yoram Singer. 2000. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135-168. Robert Schapire. The boosting approach to machine learning: an overview. 2001. In Proc. of MSRI Workshop on Nonlinear Estimation and Classification, Berkeley, CA, USA, pages 1-23. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proc. of the 7th AMTA conference, pages 223-231. Tong Xiao, Mu Li, Dongdong Zhang, Jingbo Zhu and Ming Zhou. 2009. Better Synchronous Binarization for Machine Translation. In Proc. of EMNLP 2009, Singapore, pages 362-370. Deyi Xiong, Qun Liu and Shouxun Lin. 2006. Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proc. of ACL 2006, Sydney, pages 521-528. Hao Zhang, Liang Huang, Daniel Gildea and Kevin Knight. 2006. Synchronous Binarization for Machine Translation. In Proc. of HLT-NAACL 2006, New York, USA, pages 256- 263. 748
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 749–759, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Fine-grained Genre Classification using Structural Learning Algorithms Zhili Wu Centre for Translation Studies University of Leeds, UK [email protected] Katja Markert School of Computing University of Leeds, UK [email protected] Serge Sharoff Centre for Translation Studies University of Leeds, UK [email protected] Abstract Prior use of machine learning in genre classification used a list of labels as classification categories. However, genre classes are often organised into hierarchies, e.g., covering the subgenres of fiction. In this paper we present a method of using the hierarchy of labels to improve the classification accuracy. As a testbed for this approach we use the Brown Corpus as well as a range of other corpora, including the BNC, HGC and Syracuse. The results are not encouraging: apart from the Brown corpus, the improvements of our structural classifier over the flat one are not statistically significant. We discuss the relation between structural learning performance and the visual and distributional balance of the label hierarchy, suggesting that only balanced hierarchies might profit from structural learning. 1 Introduction Automatic genre identification (AGI) can be traced to the mid-1990s (Karlgren and Cutting, 1994; Kessler et al., 1997), but this research became much more active in recent years, partly because of the explosive growth of the Web, and partly because of the importance of making genre distinctions in NLP applications. In Information Retrieval, given the large number of web pages on any given topic, it is often difficult for the users to find relevant pages that are in the right genre (Vidulin et al., 2007). As for other applications, the accuracy of many tasks, such as machine translation, POS tagging (Giesbrecht and Evert, 2009) or identification of discourse relations (Webber, 2009) relies of defining the language model suitable for the genre of a given text. For example, the accuracy of POS tagging reaching 96.9% on newspaper texts drops down to 85.7% on forums (Giesbrecht and Evert, 2009), i.e., every seventh word in forums is tagged incorrectly. This interest in genres resulted in a proliferation of studies on corpus development of web genres and comparison of methods for AGI. The two corpora commonly used for this task are KI04 (Meyer zu Eissen and Stein, 2004) and Santinis (Santini, 2007). The best results reported for these corpora (with 10-fold cross-validation) reach 84.1% on KI-04 and 96.5% accuracy on Santinis (Kanaris and Stamatatos, 2009). In our research (Sharoff et al., 2010) we produced even better results on these two benchmarks (85.8% and 97.1%, respectively). However, this impressive accuracy is not realistic in vivo, i.e., in classifying web pages retrieved as a result of actual queries. One reason comes from the limited number of genres present in these two collections (eight genres in KI-04 and seven in Santinis). As an example, only front pages of online newspapers are listed in Santinis, but not actual newspaper articles, so once an article is retrieved, it cannot be assigned to any class at all. Another reason why the high accuracy is not useful concerns the limited number of sources in each collection, e.g., all FAQs in Santinis come from either a website with FAQs on hurricanes or another one with tax advice. 
In the end, a classifier built for FAQs on this training data relies on a high topic-genre correlation in this particular collection and fails to spot any other FAQs. There are other corpora, which are more diverse in the range of their genres, such as the fifteen genres of the Brown Corpus (Kuˇcera and Francis, 1967) or the seventy genres of the BNC (Lee, 2001), but because of the number of genres in them and the diversity of documents within each genre, the accuracy of prior work on these collections is much less impressive. For example, Karlgren and Cutting (1994) using linear discriminant analysis achieve an accuracy of 52% without us749 ing cross-validation (the entire Brown Corpus was used as both the test set and training set), with the accuracy improving to 65% when the 15 genres are collapsed into 10, and to 73% with only 4 genres (Figure 1). This result suggests the importance of the hierarchy of genres. Firstly, making a decision on higher levels might be easier than on lower levels (fiction or non-fiction rather than science fiction or mystery). Secondly, we might be able to improve the accuracy on lower levels, by taking into account the relevant position of each node in the hierarchy (distinguishing between reportage or editorial becomes easier when we know they are safely under the category of press). Figure 1: Hierarchy of Brown corpus. This paper explores a way of using information on the hierarchy of labels for improving fine-grained genre classification. To the best of our knowledge, this is the first work presenting structural genre classification and distance measures for genres. In Section 2 we present a structural reformulation of Support Vector Machines (SVMs) that can take similarities between different genres into account. This formulation necessitates the development of distance measures between different genres in a hierarchy, of which we present three different types in Section 3, along with possible estimation procedures for these distances. We present experiments with these novel structural SVMs and distance measures on three different corpora in Section 4. Our experiments show that structural SVMs can outperform the non-structural standard. However, the improvement is only statistically significant on the Brown corpus. In Section 5 we investigate potential reasons for this, including the (im)balance of different genre hierarchies and problems with our distance measures. 2 Structural SVMs Discriminative methods are often used for classification, with SVMs being a well-performing method in many tasks (Boser et al., 1992; Joachims, 1999). Linear SVMs on a flat list of labels achieve high efficiency and accuracy in text classification when compared to nonlinear SVMs or other state-of-the-art methods. As for structural output learning, a few SVM-based objective functions have been proposed, including margin formulation for hierarchical learning (Dekel et al., 2004) or general structural learning (Joachims et al., 2009; Tsochantaridis et al., 2005). But many implementations are not publicly available, and their scalability to real-life text classification tasks is unknown. Also they have not been applied to genre classification. Our formulation can be taken as a special instance of the structural learning framework in (Tsochantaridis et al., 2005). However, they concentrate on more complicated label structures as for sequence alignment or parsing. 
They proposed two formulations, slack-rescaling and marginrescaling, claiming that margin-rescaling has two disadvantages. First, it potentially gives significant weight to output values that might not be easily confused with the target values, because every increase in the loss increases the required margin. However, they did not provide empirical evidence for this claim. Second, margin rescaling is not necessarily invariant to the scaling of the distance matrix. We still used margin-rescaling because it allows us to use the sequential dual method for large-scale implementation (Keerthi et al., 2008), which is not applicable to the slack-rescaling formulation. For web page classification we will need fast processing. In addition, we performed model calibration to address the second disadvantage (distance matrix invariance). Let x be a document and wm a weight vector associated with the genre class m in a corpus with k genres at the most fine-grained level. The predicted class is the class achieving the maximum inner product between x and the weight vector for the class, denoted as, arg max m wT mx, ∀m. (1) 750 Accurate prediction requires that when a document vector is multiplied with the weight vector associated with its own class, the resulting inner product should be larger than its inner products with a weight vector for any other genre class m. This helps us to define criteria for weight vectors. Let xi be the i−th training document, and yi its genre label. For its weight vector wyi, the inner product wT yixi should be larger than all other products wT mxi, that is, wT yixi −wT mxi ≥0, ∀m. (2) To strengthen the constraints, the zero value on the right hand side of the inequality for the flat SVM can be replaced by a positive value, corresponding to a distance measure h(yi, m) between two genre classes, leading to the following constraint: wT yixi −wT mxi ≥h(yi, m), ∀m. (3) To allow feasible models, in real scenarios such constraints can be violated, but the degree of violation is expected to be small. For each document, the maximum violation in the k constraints is of interest, as given by the following loss term: Lossi = max m {h(yi, m) −wT yixi + wT mxi}. (4) Adding up all loss terms over all training documents, and further introducing a term to penalize large values in the weight vectors, we have the following objective function (C is a user-specified nonnegative parameter). min m,i : 1 2 k X m=1 wT mwm + C p X i=1 Lossi. (5) Efficient methods can be derived by borrowing the sequential dual methods in (Keerthi et al., 2008) or other optimization techniques (Crammer and Singer, 2002). 3 Genre Distance Measures The structural SVM (Section 2) requires a distance measure h between two genres. We can derive such distance measures from the genre hierarchy in a way similar to word similarity measures that were invented for lexical hierarchies such as WordNet (see (Pedersen et al., 2007) for an overview). In the following, we will first shortly summarise path-based and information-based measures for similarity. However, information-based measures are based on the information content of a node in a hierarchy. Whereas the information content of a word or concept in a lexical hierarchy has been well-defined (Resnik, 1995), it is less clear how to estimate the information content of a genre label. We will therefore discuss several different ways of estimating information content of nodes in a genre hierarchy. 
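As an illustration of the margin-rescaled objective in Equations (3)-(5), the sketch below computes the per-document loss and trains with plain subgradient updates; the regularizer and the sequential dual solver used in practice are omitted, so this is only a simplified stand-in, with toy features and distances.

```python
import numpy as np

def structural_hinge(W, x, y, H):
    """Margin-rescaled loss of Equation (4) for one document.

    W : (k, d) weight matrix, one row of feature weights per genre
    x : (d,)  document feature vector
    y : gold genre index
    H : (k, k) genre distance matrix with H[y, m] = h(y, m) and H[y, y] = 0
    Returns the loss value and the most violating genre.
    """
    scores = W @ x
    violations = H[y] - scores[y] + scores        # h(y, m) - w_y.x + w_m.x
    m_star = int(np.argmax(violations))
    return max(float(violations[m_star]), 0.0), m_star

def subgradient_train(X, Y, H, C=1.0, lr=0.1, epochs=50, seed=0):
    """A simplified subgradient solver for the objective of Equation (5);
    the ||w||^2 regularizer is dropped and the sequential dual method is
    replaced by per-sample updates, purely for illustration."""
    rng = np.random.default_rng(seed)
    k, d = H.shape[0], X.shape[1]
    W = np.zeros((k, d))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            loss, m = structural_hinge(W, X[i], Y[i], H)
            if loss > 0.0 and m != Y[i]:
                W[Y[i]] += lr * C * X[i]          # pull the gold genre closer
                W[m] -= lr * C * X[i]             # push the violating genre away
    return W

# Toy data: genres 0 and 1 are "close" in the hierarchy, genre 2 is distant.
H = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 3.0],
              [3.0, 3.0, 0.0]])
X = np.array([[1.0, 0.0, 0.1],
              [0.7, 0.6, 0.0],
              [0.0, 0.2, 1.0],
              [0.1, 0.0, 0.9]])
Y = np.array([0, 1, 2, 2])
W = subgradient_train(X, Y, H)
print((W @ X.T).argmax(axis=0))   # predicted genres for the training documents
```

Prediction follows Equation (1): the genre whose weight vector gives the largest inner product with the document vector.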
3.1 Distance Measures based on Path Length If genre labels are organised into a tree (Figure 1), one of the simplest ways to measure distance between two genre labels (= tree nodes) is path length (h(a, b)plen): f(a, LCS(a, b)) + f(b, LCS(a, b)), (6) where a and b are two nodes in the tree, LCS(a, b) is their Least Common Subsumer, and f(a, LCS(a, b)) is the number of levels passed through when traversing from a to the ancestral node LCS(a, b). In other words, the distance counts the number of edges traversed from nodes a to b in the tree. For example, the distance between Learned and Misc in Figure 1 would be 3. As an alternative, the maximum path length h(a, b)pmax to their least common subsumer can be used to reduce the range of possible values: max{f(a, LCS(a, b)), f(b, LCS(a, b))}. (7) The Leacock & Chodorow similarity measure (Leacock and Chodorow, 1998) normalizes the path length measure (6) by the maximum number of nodes D when traversing down from the root. s(a, b)plsk = −log((h(a, b)plen + 1)/2D). (8) To convert it into a distance measure, we can invert it h(a, b)plsk = 1/s(a, b)plsk. Other path-length based measures include the Wu & Palmer Similarity (Wu and Palmer, 1994). s(a, b)pwupal = 2f(R, LCS(a, b)) (f(R, a) + f(R, b)), (9) where R describes the hierarchy’s root node. Here similarity is proportional to the shared path from the root to the least common subsumer of two nodes. Since the Wu & Palmer similarity is always between [0 1), we can convert it into a distance measure by h(a, b)pwupal = 1 −s(a, b)pwupal. 751 3.2 Distance Measures based on Information Content Path-based distance measures work relatively well on balanced hierarchies such as the one in Figure 1 but fail to treat hierarchies with different levels of granularity well. For lexical hierarchies, as a result, several distance measures based on information content have been suggested where the information content of a concept c in a hierarchy is measured by (Resnik, 1995) IC(c) = −log( freq(c) freq(root)). (10) The frequency freq of a concept c is the sum of the frequency of the node c itself and the frequencies of all its subnodes. Since the root may be a dummy concept, its frequency is simply the sum of the frequencies of all its subnodes. The similarity between two nodes can then be defined as the information content of their least common subsumer: s(a, b)resk = IC(LCS(a, b)). (11) If two nodes just share the root as their subsumer, their similarity will be zero. To convert 11 into a distance measure, it is possible to add a constant 1 to it before inverting it, as given by h(a, b)resk = 1/(s(a, b)resk + 1). (12) Several other similarity measures have been proposed based on the Resnik similarity such as the one by (Lin, 1998): s(a, b)lin = 2IC(LCS(a, b)) IC(a) + IC(b) . (13) Again to avoid the effect of zero similarity when defining the Lin’s distance we use: h(a, b)lin = 1/(s(a, b)lin + 1). (14) (Jiang and Conrath, 1997) directly define Jiang’s distance (h(a, b)jng): IC(a) + IC(b) −2IC(LCS(a, b)). (15) 3.2.1 Information Content of Genre Labels The notion of information content of a genre is not straightforward. We use two ways of measuring the frequency freq of a genre, depending on its interpretation. Genre Frequency based on Document Occurrence. We can interpret the “frequency” of a genre node simply as the number of all documents belonging to that genre (including any of its subgenres). Unfortunately, there are no estimates for genre frequencies on, for example, a representative sample of web documents. 
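The path-based and information-content measures of Equations (6)-(15) can be computed on a genre tree as in the following sketch; the toy tree, the frequency counts and the edge-based depth convention are illustrative assumptions.

```python
from math import log

# Toy genre tree as child -> parent; "root" is the (dummy) top node.
PARENT = {"press": "root", "fiction": "root",
          "reportage": "press", "editorial": "press",
          "mystery": "fiction", "scifi": "fiction"}

def path_to_root(node):
    path = [node]
    while path[-1] != "root":
        path.append(PARENT[path[-1]])
    return path

def lcs(a, b):
    """Least common subsumer of two genre nodes."""
    ancestors_a = set(path_to_root(a))
    for node in path_to_root(b):
        if node in ancestors_a:
            return node
    return "root"

def depth(node):                       # number of edges down from the root
    return len(path_to_root(node)) - 1

def path_length(a, b):                 # Equation (6)
    c = lcs(a, b)
    return (depth(a) - depth(c)) + (depth(b) - depth(c))

def wu_palmer_dist(a, b):              # 1 minus the similarity of Equation (9)
    c = lcs(a, b)
    return 1.0 - 2.0 * depth(c) / (depth(a) + depth(b))

# Illustrative frequencies (e.g. document counts for the leaf genres); a node's
# frequency aggregates the counts of all nodes it subsumes, Equation (10).
FREQ = {"reportage": 40, "editorial": 20, "mystery": 25, "scifi": 15}
def freq(node):
    return sum(f for n, f in FREQ.items() if node in path_to_root(n))

def ic(node):                          # information content, Equation (10)
    return -log(freq(node) / freq("root"))

def lin_dist(a, b):                    # 1 / (Lin similarity of Equation (13) + 1)
    return 1.0 / (2.0 * ic(lcs(a, b)) / (ic(a) + ic(b)) + 1.0)

def jiang_dist(a, b):                  # Jiang's distance, Equation (15)
    return ic(a) + ic(b) - 2.0 * ic(lcs(a, b))

print(path_length("reportage", "scifi"), wu_palmer_dist("reportage", "editorial"))
print(lin_dist("mystery", "scifi"), jiang_dist("reportage", "editorial"))
```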
Therefore, we approximate genre frequencies from the document frequencies (dfs) in the training sets used in classification. Note that (i) for balanced class distributions this information will not be helpful and (ii) that this is a relatively poor substitute for an estimation on an independent, representative corpus. Genre Frequency based on Genre Labels. We can also use the labels/names of the genre nodes as the unit of frequency estimation. Then, the frequency of a genre node is the occurrence frequency of its label in a corpus plus the occurrence frequencies of the labels of all its subnodes. Note that there is no direct correspondence between this measure and the document frequency of a genre: measuring the number of times the potential genre label poem occurs in a corpus is not in any way equivalent to the number of poems in that corpus. However, the measure is still structurally aware as frequencies of labels of subnodes are included, i.e. a higher level genre label will have higher frequency (and lower information content) than a lower level genre label.1 For label frequency estimation, we manually expand any label abbreviations (such as "newsp" for BNC genre labels), delete stop words and function words and then use two search methods. For the search method word we simply search the frequency of the genre label in a corpus, using three different corpora (the BNC, Brown and Google web search). As for the BNC and Brown corpus some labels are very rarely mentioned, we for these two corpora use also a search method gram where all character 5-grams within the genre label are searched for and their frequencies aggregated. 3.3 Terminology Algorithms are prefixed by the kind of distance measure they employ — IC for Information content and p for path-based). If the measure is infor1Obviously when using this measure we rely on genre labels which are meaningful in the sense that lower level labels were chosen to be more specific and therefore probably rarer terms in a corpus. The measure could not possibly be useful on a genre hierarchy that would give random names to its genres such as genre 1. 752 mation content based the specific measure is mentioned next, such as lin. The way for measuring genre frequency is indicated last with df for measuring via document frequency and word/gram when measured via frequency of genre labels. If frequencies of genre labels are used, the corpus for counting the occurrence of genre labels is also indicated via brown, bnc or the Web as estimated by Google hit counts gg. Standard non-structural SVMs are indicated by flat. 4 Experiments 4.1 Datasets We use four genre-annotated corpora for genre classification: the Brown Corpus (Kuˇcera and Francis, 1967), BNC (Lee, 2001), HGC (Stubbe and Ringlstetter, 2007) and Syracuse (Crowston et al., 2009). They have a wide variety of genre labels (from 15 in the Brown corpus to 32 genres in HGC to 70 in the BNC to 292 in Syracuse), and different types of hierarchies. 4.2 Evaluation Measures We use standard classification accuracy (Acc) on the most fine-grained level of target categories in the genre hierarchy. In addition, given a structural distance H, misclassifications can be weighted based on the distance measure. This allows us to penalize incorrect predictions which are further away in the hierarchy (such as between government documents and westerns) more than "close" mismatches (such as between science fiction and westerns). 
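Returning to the label-frequency estimation of Section 3.2.1, the sketch below illustrates the two search methods (word and gram) and the structurally aware aggregation over subnode labels; the toy in-memory corpus stands in for the BNC, Brown or web counts, and the labels and tree are illustrative.

```python
def word_count(label, corpus_tokens):
    """'word' method: frequency of the (expanded, stop-word-free) label."""
    return corpus_tokens.count(label.lower())

def gram_count(label, corpus_text):
    """'gram' method: aggregate frequencies of all character 5-grams in the
    label, useful when the full label is rarely mentioned in the corpus."""
    label = label.lower()
    grams = {label[i:i + 5] for i in range(len(label) - 4)}
    return sum(corpus_text.count(g) for g in grams)

def label_frequency(node, children, counter):
    """Structurally aware frequency: a node's frequency is its own label
    count plus the counts of the labels of all its subnodes."""
    total = counter(node)
    for child in children.get(node, []):
        total += label_frequency(child, children, counter)
    return total

corpus_text = "a newspaper editorial on fiction and science fiction reviews"
corpus_tokens = corpus_text.split()
children = {"press": ["editorial", "reportage"]}
print(label_frequency("press", children,
                      lambda lab: word_count(lab, corpus_tokens)))
print(gram_count("editorial", corpus_text))
```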
Formally, given the classification confusion matrix M then each Mab for a ̸= b contains the number of class a documents that are misclassified into class b. To achieve proper normalization in giving weights to misclassified entries, we can redistribute a total weight k −1 to each row of H proportionally to its values, where k is the number of genres. That is, given g the row summation of H, we define a weight matrix Q by normalizing the rows of H in a way given by Qab = (k −1)hab/ga, a ̸= b. We further assign a unit value to the diagonal of Q. Then it is possible to construct a structurally-aware measure (S-Acc): S-Acc = X a Maa/ X a,b MabQab. (16) 4.3 Experimental Setup We compare structural SVMs using all path-based and information-content based measures (see also Section 3.3). As a baseline we use the accuracy achieved by a standard "flat" SVM. We use 10-fold (randomised) cross validation throughout. In each fold, for each genre class 10% of documents are used for testing. For the remaining 90%, a portion of 10% are sampled for parameter tuning, leaving 80% for training. In each round the validation set is used to help determine the best C associated with Equation (5) based on the validation accuracy from the candidate list 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1. Note via this experiment setup, all methods are tuned to their best performance. For any algorithm comparison, we use a McNemar test with the significance level of 5% as recommended by (Dietterich, 1998). 4.4 Features The features used for genre classification are character 4-grams for all algorithms, i.e. each document is represented by a binary vector indicating the existence of each character 4-gram. We used character n-grams because they are very easy to extract, language-independent (no need to rely on parsing or even stemming), and they are known to have the best performance in genre classification tasks (Kanaris and Stamatatos, 2009; Sharoff et al., 2010). 4.5 Brown Corpus Results The Brown Corpus has 500 documents and is organized in a hierarchy with a depth of 3. It contains 15 end-level genres. In one experiment in (Karlgren and Cutting, 1994) the subgenres under fiction are grouped together, leading to 10 genres to classify. Results on 10-genre Brown Corpus. A standard flat SVM achieves an accuracy of 64.4% whereas the best structural SVM based on Lin’s information content distance measure (IC-linword-bnc) achieves 68.8% accuracy, significantly better at the 1% level. The result is also significantly better than prior work on the Brown corpus in (Karlgren and Cutting, 1994) (who use the whole corpus as test as well as training data). Table 1 summarizes the best performing measures that all outperform the flat SVM at the 1% level. 753 Table 1: Brown 10-genre Classification Results. Method Accuracy Karlgren and Cutting, 1994 65 (Training) Flat SVM 64.40 SSVM(IC-lin-word-bnc) 68.80 SSVM(IC-lin-word-br) 68.60 SSVM(IC-lin-gram-br) 67.80 Figure 2 provides the box plots of accuracy scores. The dashed boxes indicate that the distance measures perform significantly worse than the best performing IC-lin-word-bnc at the bottom. The solid boxes indicate the corresponding measures are statistically comparable to the IC-lin-word-bnc in terms of the mean accuracy they can achieve. 
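A minimal sketch of the structurally-aware accuracy of Equation (16) is given below; the confusion matrices and the distance matrix are toy values chosen to show that "close" mistakes are penalized less than "distant" ones.

```python
import numpy as np

def structural_accuracy(M, H):
    """Structurally-aware accuracy (S-Acc), Equation (16).

    M : (k, k) confusion matrix, M[a, b] = number of class-a documents
        predicted as class b
    H : (k, k) genre distance matrix
    Misclassifications are weighted by a row-normalized version of H, so
    that distant errors count more heavily than close ones.
    """
    k = M.shape[0]
    g = H.sum(axis=1, keepdims=True)          # row sums of H
    Q = (k - 1) * H / g                       # redistribute a total weight k-1 per row
    np.fill_diagonal(Q, 1.0)                  # correct predictions keep unit weight
    return np.trace(M) / (M * Q).sum()

# Toy 3-genre example: genres 0 and 1 are close, genre 2 is far from both.
H = np.array([[0.0, 1.0, 3.0], [1.0, 0.0, 3.0], [3.0, 3.0, 0.0]])
M_close = np.array([[8, 2, 0], [1, 9, 0], [0, 0, 10]])   # only "close" mistakes
M_far   = np.array([[8, 0, 2], [0, 9, 1], [0, 0, 10]])   # same number of "far" mistakes
print(structural_accuracy(M_close, H), structural_accuracy(M_far, H))
```

Both toy systems have the same plain accuracy (27/30), but the one whose errors stay within nearby genres obtains the higher S-Acc.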
50 55 60 65 70 75 80 IC−lin−word−bnc IC−lin−word−br IC−jng−df pwupal IC−lin−gram−br IC−resk−word−bnc IC−resk−word−gg plen IC−resk−df IC−lin−gram−bnc IC−resk−gram−br IC−lin−df IC−resk−gram−bnc IC−resk−word−br IC−lin−word−gg plsk pmax IC−jng−word−br IC−jng−word−bnc flat IC−jng−gram−bnc IC−jng−gram−br IC−jng−word−gg Accuracy Figure 2: Accuracy on Brown Corpus (10 genres). Results on 15-genre Brown Corpus. We perform experiments on all 15 genres on the end level of the Brown corpus. The increase of genre classes leads to reduced classification performance. In our experiment, the flat SVM achieves an accuracy of 52.40%, and the structural SVM using path length measure achieves 55.40%, a difference significant at the 5% level. The structural SVMs using information content measures IC-lin-gram-bnc and ICresk-word-br also perform equally well. In addition, we improve on the training accuracy of 52% reported in (Karlgren and Cutting, 1994). We are also interested in structural accuracy (SAcc) to see whether the structural SVMs make fewer "big" mistakes. Table 2 shows a cross comparison of structural accuracy. Each row shows how accurate the corresponding method is under the structural accuracy criteria given in the column. The ’no-struct’ column corresponds to vanilla accuracy. It is natural to expect each diagonal entry of the numeric table to be the highest, since the respective method is optimised for its own structural distance. However, in our case, Lin’s information content measure and the plen measure perform well under any structural accuracy evaluation measure and outperform flat SVMs. 4.6 Other Corpora In spite of the promising results on the Brown Corpus, structural SVMs on other corpora (BNC, HGC, Syracuse) did not show considerable improvement. HGC contains 1330 documents divided into 32 approximately equally frequent classes. Its hierarchy has just two levels. Standard accuracy for the best performing structural methods on HGC is just the same as for flat SVM (69.1%), with marginally better structural accuracy (for example, 71.39 vs. 71.04%, using a path-length based structural accuracy). The BNC corpus contains 70 genres and 4053 documents. The number of documents per class ranges from 2 to 501. The accuracy of SSVM is also just comparable to flat SVM (73.6%). The Syracuse corpus is a recently developed large collection of 3027 annotated webpages divided into 292 genres (Crowston et al., 2009). Focusing only on genres containing 15 or more examples, we arrived at a corpus of 2293 samples and 52 genres. Accuracy for flat (53.3%) and structural SVMs (53.7%) are again comparable. 5 Discussion Given that structural learning can help in topical classification tasks (Tsochantaridis et al., 2005; Dekel et al., 2004), the lack of success on genres is surprising. We now discuss potential reasons for this lack of success. 5.1 Tree Depth and Balance Our best results were achieved on the Brown corpus, whose genre tree has at least three attractive properties. Firstly, it has a depth greater than 2, i.e. several levels are distinguished. Secondly, it seems visually balanced: branches from root to leaves (or terminals) are of pretty much equal length; branching factors are similar, for example ranging between 2 and 6 for the last level of branching. Thirdly, the number of examples at 754 Table 2: Structural Accuracy on Brown 15-genre Classification. 
Method no-struct (=typical accuracy) IC-lin-gram-bnc plen IC-resk-word-br IC-jng-word-gg flat 52.40 55.34 60.60 58.91 52.19 IC-lin-gram-bnc 55.00 58.15 63.59 61.83 53.85 plen 55.40 58.74 64.51 62.61 54.27 IC-resk-word-br 55.00 58.24 63.96 62.08 54.08 IC-jng-word-gg 46.00 49.00 54.89 53.01 52.58 each leaf node is roughly comparable (distributional balance). The other hierarchies violate these properties to a large extent. Thus, the genres in HGC are almost represented by a flat list with just one extra level over 32 categories. Similarly, the vast majority of genres in the Syracuse corpus are also organised in two levels only. Such flat hierarchies do not offer much scope to improve over a completely flat list. There are considerably more levels in the BNC for some branches, e.g., written/national/broadsheet/arts, but many other genres are still only specified to the second level of its hierarchy, e.g., written/adverts. In addition, the BNC is also distributionally imbalanced, i.e. the number of documents per class varies from 2 to 501 documents. To test our hypothesis, we tried to skew the Brown genre tree in two ways. First, we kept the tree relatively balanced visually and distributionally but flattened it by removing the second layer Press, Misc, Non-Fiction, Fiction from the hierarchy, leaving a tree with only two layers. Second, we skewed the visual and distributional balance of the tree by collapsing its three leaf-level genres under Press, and the two under non-fiction, leading to 12 genres to classify (cf. Figure 1). 30 35 40 45 50 55 60 65 70 IC−resk−word−bnc IC−resk−gram−bnc IC−resk−word−br IC−lin−gram−bnc plen pwupal IC−lin−word−br IC−resk−word−gg IC−lin−df IC−lin−word−bnc IC−lin−gram−br IC−jng−df flat IC−resk−df plsk IC−resk−gram−br pmax IC−lin−word−gg IC−jng−gram−bnc IC−jng−gram−br IC−jng−word−br IC−jng−word−bnc IC−jng−word−gg Accuracy Figure 3: Accuracy on flattened Brown Corpus (15 genres). 35 40 45 50 55 60 65 70 75 IC−resk−word−br IC−resk−gram−bnc pmax IC−resk−gram−br IC−resk−df IC−lin−word−bnc pwupal plen IC−resk−word−bnc plsk IC−lin−gram−br flat IC−lin−word−br IC−lin−df IC−lin−gram−bnc IC−jng−gram−br IC−jng−df IC−resk−word−gg IC−lin−word−gg IC−jng−gram−bnc IC−jng−word−br IC−jng−word−bnc IC−jng−word−gg Accuracy Figure 4: Accuracy on skewed Brown Corpus (12 genres). As expected, the structural methods on either skewed or flattened hierarchies are not significantly better than the flat SVM. For the flattened hierarchy of 15 leaf genres the maximal accuracy is 54.2% vs. 52.4% for the flat SVM (Figure 3), a non-significant improvement. Similarly, the maximal accuracy on the skewed 12-genre hierarchy is 58.2% vs. 56% (see also Figure 4), again a not significant improvement. To measure the degree of balance of a tree, we introduce two tree balance scores based on entropy. First, for both measures we extend all branches to the maximum depth of the tree. Then level by level we calculate an entropy score, either according to how many tree nodes at the next level belong to a node at this level (denoted as vb: visual balance), or according to how many end level documents belong to a node at this level (denoted as db: distribution balance). To make trees with different numbers of internal nodes and leaves more comparable, the entropy score at each level is normalized by the maximal entropy achieved by a tree with uniform distribution of nodes/documents, which is simply −log(1/N), where N denotes the number of nodes at the corre755 sponding level. 
Finally, the entropy scores for all levels are averaged. It can be shown that any perfect N-ary tree will have the largest visual balance score of 1. If in addition its nodes at each level contain the same number of documents, the distribution balance score will reach the maximum, too. Table 3 shows the balance scores for all the corpora we use. The first two rows for the Brown corpus have both large visual balance and distribution balance scores. As shown earlier, for those two setups the structural SVMs perform better than the flat approach. In contrast, for the tree hierarchies of Brown that we deformed or flattened, and also BNC and Syracuse, either or both of the two balance scores tend to be lower, and no improvement has been obtained over the flat approach. This may indicate that a further exploration of the relation between tree balance and the performance of structural SVMs is warranted. However, high visual balance and distribution scores do not necessarily imply high performance of structural SVMs, as very flat trees are also visually very balanced. As an example, HGC has a high visual balance score due to a shallow hierarchy and a high distributional balance score due to a roughly equal number of documents contained in each genre. However, HGC did not benefit from structural learning as it is also a very shallow hierarchy; therefore we think that a third variable depth also needs to be taken into account. A similar observation on the importance of well-balanced hierarchies comes from a recent Pascal challenge on large scale hierarchical text classification,2 which shows that some flat approaches perform competitively in topic classification with imbalanced hierarchies. However, the participants do not explore explicitly the relation between tree balance and performance. Other methods for measuring tree balance (some of which are related to ours) are used in the field of phylogenetic research (Shao and Sokal, 1990) but they are only applicable to visual balance. In addition, the methods they used often provide conflicting results on which trees are considered as balanced (Shao and Sokal, 1990). 5.2 Distance Measures We also scrutinise our distance measures as these are crucial for the structural approach. We notice that simple path length based measures per2http://lshtc.iit.demokritos.gr/ Table 3: Tree Balance Scores Corpus depth vb db Brown (10 genres) 3 0.9115 0.9024 Brown (15 genres) 3 0.9186 0.9083 Brown (15, flattened) 2 0.9855 0.8742 Brown (12, skewed) 3 0.8747 0.8947 HGC (32) 2 0.9562 0.9570 BNC (70) 4 0.9536 0.8039 Syracuse (52) 3 0.9404 0.8634 form well overall; again for the Brown corpus this is probably due to its balanced hierarchy which makes path length appropriate. There are other probable reasons why information content based measures do not perform better than pathlength based ones. When measured via document frequency in a corpus we do not have sufficiently large, representative genre-annotated corpora to hand. When measured via genre label frequency, we run into at least two problems. Firstly, as mentioned in Section 3.2.1 genre label frequency does not have to correspond to class frequency of documents. Secondly, the labels used are often abbreviations (e.g. W_institut_doc, W_newsp_brdsht_nat_social in BNC Corpus), underspecified (other, misc, unclassified) or a collection of phrases (e.g. belles letters, etc. in Brown). This made search for frequency very approximate and also loosens the link between label and content. 
We investigated in more depth how well the different distance measures are aligned. We adapt the alignment measure between kernels (Cristianini et al., 2002), to investigate how close the distance matrices are. For two distance matrices H1 and H2, their alignment A(H1, H2) is defined as: < H1, H2 >F √< H1, H1 >F , < H2, H2 >F , (17) where < H1, H2 >F = Pk i,j H1(gi, gj)H2(gi, gj) which is the total sum of the entry-wise products between the two distance matrices. Figure 5 shows several distance matrices on the (original) 15 genre Brown corpus. The plen matrix has clear blocks for the super genres press, informative, imaginative, etc. The IC-lin-gram-bnc matrix refines distances in the blocks, due to the introduction of information content. It keeps an alignment score that is over 0.99 (the maximum is 1.00) toward the plen matrix, and still has visible block patterns. However, the IC-jng-word-bnc significantly adjusts the 756 distance entries, has a much lower alignment score with the plen matrix, and doesn’t reveal apparent blocks. This partially explains the bad performance of the Jiang distance measure on the Brown corpus (see Section 4). The diagrams also show the high closeness between the best performing IC measure and the simple path length based measure. plen Informative Imaginative Press Misc nonfiction IC−lin−gram−bnc (0.98376) Informative Imaginative Press Misc nonfiction plsk (0.96061) Informative Imaginative Press Misc nonfiction IC−jng−word−bnc (0.92993) Informative Imaginative Press Misc nonfiction Figure 5: Distance Matrices on Brown. Values in bracket is the alignment with the plen matrix An alternative to structural distance measures would be distance measures between the genres based on pairwise cosine similarities between them. To assess this, we aggregated all character 4-gram training vectors of each genre and calculated standard cosine similarities. Note that these similarities are based on the documents only and do not make use of the Brown hierarchy at all. After converting the similarities to distance, we plug the distance matrix into our structural SVM. However, accuracy on the Brown corpus (15 genres) was almost the same as for a flat SVM. Inspecting the distance matrix visually, we determined that the cosine similarity could clearly distinguish between Fiction and Non-Fiction texts but not between any other genres. This also indicates that the genre structural hierarchy clearly gives information not present in the simple character 4-gram features we use. For a more detailed discussion of the problems of the currently prevalently used character n-grams as features for genre classification, we refer the reader to (Sharoff et al., 2010). 6 Conclusions In this paper, we have evaluated structural learning approaches to genre classification using several different genre distance measures. Although we were able to improve on non-structural approaches for the Brown corpus, we found it hard to improve over flat SVMs on other corpora. As potential reasons for this negative result, we suggest that current genre hierarchies are either not of sufficient depth or are visually or distributionally imbalanced. We think further investigation into the relationship between hierarchy balance and structural learning is warranted. Further investigation is also needed into the appropriateness of n-gram features for genre identification as well as good measures of genre distance. 
In the future, an important task would be the refinement or unsupervised generation of new hierarchies, using information theoretic or data-driven approaches. For a full assessment of hierarchical learning for genre classification, the field of genre studies needs a testbed similar to the Reuters or 20 Newsgroups datasets used in topic-based IR with a balanced genre hierarchy and a representative corpus of reliably annotated webpages. With regard to algorithms, we are also interested in other formulations for structural SVMs and their large-scale implementation as well as the combination of different distance measures, for example in ensemble learning. Acknowledgements We would like to thank the authors of each corpus collection, who invested a lot of effort into producing them. We are also grateful to Google Inc. for supporting this research via their Google Research Awards programme. References Boser, B. E., Guyon, I. M., and Vapnik, V. N. (1992). A training algorithm for optimal margin classifiers. In COLT ’92: Proceedings of the fifth annual workshop on Computational learning theory, pages 144–152, New York, NY, USA. ACM. Crammer, K. and Singer, Y. (2002). On the algorithmic implementation of multiclass kernelbased vector machines. J. Mach. Learn. Res., 2:265–292. Cristianini, N., Shawe-Taylor, J., and Kandola, J. (2002). On kernel target alignment. In Proceedings of the Neural Information Process757 ing Systems, NIPS’01, pages 367–373. MIT Press. Crowston, K., Kwasnik, B., and Rubleske, J. (2009). Problems in the use-centered development of a taxonomy of web genres. In Mehler, A., Sharoff, S., and Santini, M., editors, Genres on the Web: Computational Models and Empirical Studies. Springer, Berlin/New York. Dekel, O., Keshet, J., and Singer, Y. (2004). Large margin hierarchical classification. In ICML ’04: Proceedings of the twenty-first international conference on Machine learning, page 27, New York, NY, USA. ACM. Dietterich, T. G. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10:1895–1923. Giesbrecht, E. and Evert, S. (2009). Part-ofSpeech (POS) Tagging - a solved task? An evaluation of POS taggers for the Web as corpus. In Proceedings of the Fifth Web as Corpus Workshop (WAC5), pages 27–35, Donostia-San Sebastián. Jiang, J. J. and Conrath, D. W. (1997). Semantic similarity based on corpus statistics and lexical taxonomy. CoRR, cmp-lg/9709008. Joachims, T. (1999). Making large-scale SVM learning practical. In Schölkopf, B., Burges, C., and Smola, A., editors, Advances in Kernel Methods – Support Vector Learning, pages 41–56. MIT Press. Joachims, T., Finley, T., and Yu, C.-N. (2009). Cutting-plane training of structural svms. Machine Learning, 77(1):27–59. Kanaris, I. and Stamatatos, E. (2009). Learning to recognize webpage genres. Information Processing and Management, 45:499–512. Karlgren, J. and Cutting, D. (1994). Recognizing text genres with simple metrics using discriminant analysis. In Proc. of the 15th. International Conference on Computational Linguistics (COLING 94), pages 1071 – 1075, Kyoto, Japan. Keerthi, S. S., Sundararajan, S., Chang, K.-W., Hsieh, C.-J., and Lin, C.-J. (2008). A sequential dual method for large scale multiclass linear svms. In KDD ’08: Proceeding of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 408–416, New York, NY, USA. ACM. Kessler, B., Nunberg, G., and Schütze, H. (1997). Automatic detection of text genre. 
In Proceedings of the 35th ACL/8th EACL, pages 32–38. Kuˇcera, H. and Francis, W. N. (1967). Computational analysis of present-day American English. Brown University Press, Providence. Leacock, C. and Chodorow, M. (1998). Combining local context and WordNet similarity for word sense identification, pages 305–332. In C. Fellbaum (Ed.), MIT Press. Lee, D. (2001). Genres, registers, text types, domains, and styles: clarifying the concepts and navigating a path through the BNC jungle. Language Learning and Technology, 5(3):37–72. Lin, D. (1998). An information-theoretic definition of similarity. In ICML ’98: Proceedings of the Fifteenth International Conference on Machine Learning, pages 296–304, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Meyer zu Eissen, S. and Stein, B. (2004). Genre classification of web pages. In Proceedings of the 27th German Conference on Artificial Intelligence, Ulm, Germany. Pedersen, T., Pakhomov, S. V. S., Patwardhan, S., and Chute, C. G. (2007). Measures of semantic similarity and relatedness in the biomedical domain. J. of Biomedical Informatics, 40(3):288–299. Resnik, P. (1995). Using information content to evaluate semantic similarity in a taxonomy. In IJCAI’95: Proceedings of the 14th international joint conference on Artificial intelligence, pages 448–453, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. 758 Santini, M. (2007). Automatic Identification of Genre in Web Pages. PhD thesis, University of Brighton. Shao, K.-T. and Sokal, R. R. (1990). Tree balance. Systematic Zoology, 39(3):266–276. Sharoff, S., Wu, Z., and Markert, K. (2010). The Web library of Babel: evaluating genre collections. In Proc. of the Seventh Language Resources and Evaluation Conference, LREC 2010, Malta. Stubbe, A. and Ringlstetter, C. (2007). Recognizing genres. In Santini, M. and Sharoff, S., editors, Proc. Towards a Reference Corpus of Web Genres. Tsochantaridis, I., Joachims, T., Hofmann, T., and Altun, Y. (2005). Large margin methods for structured and interdependent output variables. J. Mach. Learn. Res., 6:1453–1484. Vidulin, V., Luštrek, M., and Gams, M. (2007). Using genres to improve search engines. In Proc. Towards Genre-Enabled Search Engines: The Impact of NLP. RANLP-07. Webber, B. (2009). Genre distinctions for discourse in the Penn TreeBank. In Proc the 47th Annual Meeting of the ACL, pages 674– 682. Wu, Z. and Palmer, M. (1994). Verbs semantics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computational Linguistics, pages 133–138, Morristown, NJ, USA. Association for Computational Linguistics. 759
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 760–769, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Metadata-Aware Measures for Answer Summarization in Community Question Answering Mattia Tomasoni ∗ Dept. of Information Technology Uppsala University, Uppsala, Sweden [email protected] Minlie Huang Dept. Computer Science and Technology Tsinghua University, Beijing 100084, China [email protected] Abstract This paper presents a framework for automatically processing information coming from community Question Answering (cQA) portals with the purpose of generating a trustful, complete, relevant and succinct summary in response to a question. We exploit the metadata intrinsically present in User Generated Content (UGC) to bias automatic multi-document summarization techniques toward high quality information. We adopt a representation of concepts alternative to n-grams and propose two concept-scoring functions based on semantic overlap. Experimental results on data drawn from Yahoo! Answers demonstrate the effectiveness of our method in terms of ROUGE scores. We show that the information contained in the best answers voted by users of cQA portals can be successfully complemented by our method. 1 Introduction Community Question Answering (cQA) portals are an example of Social Media where the information need of a user is expressed in the form of a question for which a best answer is picked among the ones generated by other users. cQA websites are becoming an increasingly popular complement to search engines: overnight, a user can expect a human-crafted, natural language answer tailored to her specific needs. We have to be aware, though, that User Generated Content (UGC) is often redundant, noisy and untrustworthy (Jeon et al., ∗The research was conducted while the first author was visiting Tsinghua University. 2006; Wang et al., 2009b; Suryanto et al., 2009). Interestingly, a great amount of information is embedded in the metadata generated as a byproduct of users’ action and interaction on Social Media. Much valuable information is contained in answers other than the chosen best one (Liu et al., 2008). Our work aims to show that such information can be successfully extracted and made available by exploiting metadata to distill cQA content. To this end, we casted the problem to an instance of the query-biased multi-document summarization task, where the question was seen as a query and the available answers as documents to be summarized. We mapped each characteristic that an ideal answer should present to a measurable property that we wished the final summary could exhibit: • Quality to assess trustfulness in the source, • Coverage to ensure completeness of the information presented, • Relevance to keep focused on the user’s information need and • Novelty to avoid redundancy. Quality of the information was assessed via Machine Learning (ML) techniques under best answer supervision in a vector space consisting of linguistic and statistical features about the answers and their authors. Coverage was estimated by semantic comparison with the knowledge space of a corpus of answers to similar questions which had been retrieved through the Yahoo! Answers API 1. Relevance was computed as information overlap between an answer and its question, while Novelty was calculated as inverse overlap with all other answers to the same question. 
A score was assigned to each concept in an answer according to 1http://developer.yahoo.com/answers 760 the above properties. A score-maximizing summary under a maximum coverage model was then computed by solving an associated Integer Linear Programming problem (Gillick and Favre, 2009; McDonald, 2007). We chose to express concepts in the form of Basic Elements (BE), a semantic unit developed at ISI2 and modeled semantic overlap as intersection in the equivalence classes of two concepts (formal definitions will be given in section 2.3). The objective of our work was to present what we believe is a valuable conceptual framework; more advance machine learning and summarization techniques would most likely improve the performances. The remaining of this paper is organized as follows. In the next section Quality, Coverage, Relevance and Novelty measures are presented; we explain how they were calculated and combined to generate a final summary of all answers to a question. Experiments are illustrated in Section 3, where we give evidence of the effectiveness of our method. We list related work in Section 5, discuss possible alternative approaches in Section 4 and provide our conclusions in Section 6. 2 The summarization framework 2.1 Quality as a ranking problem Quality assessing of information available on Social Media had been studied before mainly as a binary classification problem with the objective of detecting low quality content. We, on the other hand, treated it as a ranking problem and made use of quality estimates with the novel intent of successfully combining information from sources with different levels of trustfulness and writing ability. This is crucial when manipulating UGC, which is known to be subject to particularly great variance in credibility (Jeon et al., 2006; Wang et al., 2009b; Suryanto et al., 2009) and may be poorly written. An answer a was given along with information about the user u that authored it, the set TAq (Total Answers) of all answers to the same question q and the set TAu of all answers by the same user. Making use of results available in the literature (Agichtein et al., 2008) 3, we designed a Quality 2Information Sciences Institute, University of Southern California, http://www.isi.edu 3A long list of features is proposed; training a classifier on all of them would no doubt increase the performances. feature space to capture the following syntactic, behavioral and statistical properties: • ϑ, length of answer a • ς, number of non-stopwords in a with a corpus frequency larger than n (set to 5 in our experiments) • ϖ, points awarded to user u according to the Yahoo! Answers’ points system • ϱ, ratio of best answers posted by user u The features mentioned above determined a space Ψ; An answer a, in such feature space, assumed the vectorial form: Ψa = ( ϑ, ς, ϖ, ϱ ) Following the intuition that chosen best answers (a⋆) carry high quality information, we used supervised ML techniques to predict the probability of a to have been selected as a best answer a⋆. We trained a Linear Regression classifier to learn the weight vector W = (w1, w2, w3, w4) that would combine the above feature. Supervision was given in the form of a training set TrQ of labeled pairs defined as: TrQ = {⟨Ψa, isbesta ⟩} isbesta was a boolean label indicating whether a was an a⋆answer; the training set size was determined experimentally and will be discussed in Section 3.2. 
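As a rough illustration of this setup (not the authors' exact pipeline), the sketch below assembles four-dimensional Ψa vectors and fits the weight vector W with an ordinary least-squares regression; the feature values and labels are invented toy data.

import numpy as np

# Psi_a = (theta, sigma, pi, rho): answer length, number of non-stopwords with
# corpus frequency above the threshold, author's Yahoo! Answers points, and the
# author's ratio of best answers. All rows below are made-up toy values.
X = np.array([
    [84.0, 12.0, 5400.0, 0.31],   # a chosen best answer
    [ 3.0,  0.0,  120.0, 0.02],   # a low-quality answer
    [51.0,  9.0, 8300.0, 0.45],   # another best answer
    [17.0,  2.0,  600.0, 0.05],   # a mediocre answer
])
y = np.array([1.0, 0.0, 1.0, 0.0])          # isbest_a labels

W, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares estimate of W
print(W)
print(X @ W)   # Q(Psi_a) = W . Psi_a, interpreted as quality estimates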
Although the value of isbesta was known for all answers, the output of the classifier offered us a real-valued prediction that could be interpreted as a quality score Q(Ψa): Q(Ψa) ≈ P( isbesta = 1 | a, u, TAu, ) ≈ P( isbesta = 1 | Ψa ) = W T · Ψa (1) The Quality measure for an answer a was approximated by the probability of such answer to be a best answer (isbesta = 1) with respect to its author u and the sets TAu and TAq. It was calculated as dot product between the learned weight vector W and the feature vector for answer Ψa. Our decision to proceed in an unsupervised direction came from the consideration that any use of external human annotation would have made it impracticable to build an actual system on larger scale. An alternative, completely unsupervised approach to quality detection that has not undergone experimental analysis is discussed in Section 4. 761 2.2 Bag-of-BEs and semantic overlap The properties that remain to be discussed, namely Coverage, Relevance and Novelty, are measures of semantic overlap between concepts; a concept is the smallest unit of meaning in a portion of written text. To represent sentences and answers we adopted an alternative approach to classical ngrams that could be defined bag-of-BEs. a BE is “a head|modifier|relation triple representation of a document developed at ISI” (Zhou et al., 2006). BEs are a strong theoretical instrument to tackle the ambiguity inherent in natural language that find successful practical applications in realworld query-based summarization systems. Different from n-grams, they are variant in length and depend on parsing techniques, named entity detection, part-of-speech tagging and resolution of syntactic forms such as hyponyms, pronouns, pertainyms, abbreviation and synonyms. To each BE is associated a class of semantically equivalent BEs as result of what is called a transformation of the original BE; the mentioned class uniquely defines the concept. What seemed to us most remarkable is that this makes the concept contextdependent. A sentence is defined as a set of concepts and an answer is defined as the union between the sets that represent its sentences. The rest of this section gives formal definition of our model of concept representation and semantic overlap. From a set-theoretical point of view, each concepts c was uniquely associated with a set Ec = {c1, c2 . . . cm} such that: ∀i, j (ci ≈L c) ∧(ci ̸≡c) ∧(ci ̸≡cj) In our model, the “≡” relation indicated syntactic equivalence (exact pattern matching), while the “≈L” relation represented semantic equivalence under the convention of some language L (two concepts having the same meaning). Ec was defined as the set of semantically equivalent concepts to c, called its equivalence class; each concept ci in Ec carried the same meaning (≈L) of concept c without being syntactically identical (≡); furthermore, no two concepts i and j in the same equivalence class were identical. “Climbing a tree to escape a black bear is pointless because they can climb very well.” BE = they|climb Ec = {climb|bears, bear|go up, climbing|animals, climber|instincts, trees|go up, claws|climb...} Given two concepts c and k: c ▷◁k ( c ≡k or Ec ∩Ek ̸= ∅ We defined semantic overlap as occurring between c and k if they were syntactically identical or if their equivalence classes Ec and Ek had at least one element in common. 
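Leaving aside the actual BE extraction (done with the BEwT-E toolkit), the overlap test itself reduces to a set intersection. A minimal sketch, assuming each concept carries its BE string and a precomputed equivalence class:

class Concept:
    def __init__(self, be, equiv_class):
        self.be = be                   # e.g. "they|climb"
        self.equiv = set(equiv_class)  # E_c: semantically equivalent BEs

def overlaps(c, k):
    # c and k overlap if they are syntactically identical or if their
    # equivalence classes share at least one element.
    return c.be == k.be or bool(c.equiv & k.equiv)

c = Concept("they|climb", {"climb|bears", "bear|go up", "climbing|animals"})
k = Concept("bears|climbing", {"climb|bears", "claws|climb"})
print(overlaps(c, k))   # True: both classes contain "climb|bears"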
In fact, given the above definition of equivalence class and the transitivity of “≡” relation, we have that if the equivalence classes of two concepts are not disjoint, then they must bare the same meaning under the convention of some language L; in that case we said that c semantically overlapped k. It is worth noting that relation “▷◁” is symmetric, transitive and reflexive; as a consequence all concepts with the same meaning are part of a same equivalence class. BE and equivalence class extraction were performed by modifying the behavior of the BEwT-E-0.3 framework 4. The framework itself is responsible for the operative definition of the “≈L” relation and the creation of the equivalence classes. 2.3 Coverage via concept importance In the scenario we proposed, the user’s information need is addressed in the form of a unique, summarized answer; information that is left out of the final summary will simply be unavailable. This raises the concern of completeness: besides ensuring that the information provided could be trusted, we wanted to guarantee that the posed question was being answered thoroughly. We adopted the general definition of Coverage as the portion of relevant information about a certain subject that is contained in a document (Swaminathan et al., 2009). We proceeded by treating each answer to a question q as a separate document and we retrieved through the Yahoo! Answers API a set TKq (Total Knowledge) of 50 answers 5 to questions similar to q: the knowledge space of TKq was chosen to approximate the entire knowledge space related to the queried question q. We calculated Coverage as a function of the portion of answers in TKq that presented semantic overlap with a. 4The authors can be contacted regarding the possibility of sharing the code of the modified version. Original version available from http://www.isi.edu/ publications/licensed-sw/BE/index.html. 5such limit was imposed by the current version of the API. Experiments with a greater corpus should be carried out in the future. 762 C(a, q) = X ci∈a γ(ci) · tf(ci, a) (2) The Coverage measure for an answer a was calculated as the sum of term frequency tf(ci, a) for concepts in the answer itself, weighted by a concept importance function, γ(ci), for concepts in the total knowledge space TKq. γ(c) was defined as follows: γ(c) = |TKq,c| |TKq| · log2 |TKq| |TKq,c| (3) where TKq,c = {d ∈TKq : ∃k ∈d, k ▷◁c} The function γ(c) of concept c was calculated as a function of the cardinality of set TKq and set TKq,c, which was the subset of all those answers d that contained at least one concept k which presented semantical overlap with c itself. A similar idea of knowledge space coverage is addressed by Swaminathan et al. (2009), from which formulas (2) and (3) were derived. A sensible alternative would be to estimate Coverage at the sentence level. 2.4 Relevance and Novelty via ▷◁relation To this point, we have addressed matters of trustfulness and completeness. Another widely shared concern for Information Retrieval systems is Relevance to the query. We calculated relevance by computing the semantic overlap between concepts in the answers and the question. Intuitively, we reward concepts that express meaning that could be found in the question to be answered. 
R(c, q) = |qc| |q| (4) where qc = {k ∈q : k ▷◁c} The Relevance measure R(c, q) of a concept c with respect to a question q was calculated as the ratio of the cardinality of set qc (containing all concepts in q that semantically overlapped with c) normalized by the total number of concepts in q. Another property we found desirable, was to minimize redundancy of information in the final summary. Since all elements in TAq (the set of all answers to q) would be used for the final summary, we positively rewarded concepts that were expressing novel meanings. N(c, q) = 1 −|TAq,c| |TAq| (5) where TAq,c = {d ∈TAq : ∃k ∈d, k ▷◁c} The Novelty measure N(c, q) of a concept c with respect to a question q was calculated as the ratio of the cardinality of set TAq,c over the cardinality of set TAq; TAq,c was the subset of all those answers d in TAq that contained at least one concept k which presented semantical overlap with c. 2.5 The concept scoring functions We have now determined how to calculate the scores for each property in formulas (1), (2), (4) and (5); under the assumption that the Quality and Coverage of a concept are the same of its answer, every concept c part of an answer a to some question q, could be assigned a score vector as follows: Φc = ( Q(Ψa), C(a, q), R(c, q), N(c, q) ) What we needed at this point was a function S of the above vector which would assign a higher score to concepts most worthy of being included in the final summary. Our intuition was that since Quality, Coverage, Novelty and Relevance were all virtues properties, S needed to be monotonically increasing with respect to all its dimensions. We designed two such functions. Function (6), which multiplied the scores, was based on the probabilistic interpretation of each score as an independent event. Further empirical considerations, brought us to later introduce a logarithmic component that would discourage inclusion of sentences shorter then a threshold t (a reasonable choice for this parameter is a value around 20). The score for concept c appearing in sentence sc was calculated as: SΠ(c) = 4 Y i=1 (Φc i) · logt(length(sc)) (6) A second approach that made use of human annotation to learn a vector of weights V = (v1, v2, v3, v4) that linearly combined the scores was investigated. Analogously to what had been done with scoring function (6), the Φ space was augmented with a dimension representing the length of the answer. SΣ(c) = 4 X i=1 (Φc i · vi) + length(sc) · v5 (7) In order to learn the weight vector V that would combine the above scores, we asked three human annotators to generate question-biased extractive summaries based on all answers available for a certain question. We trained a Linear Regression 763 classifier with a set TrS of labeled pairs defined as: TrS = {⟨(Φc, length(sc)), includec ⟩} includec was a boolean label that indicated whether sc, the sentence containing c, had been included in the human-generated summary; length(sc) indicated the length of sentence sc. Questions and relative answers for the generation of human summaries were taken from the “filtered dataset” described in Section 3.1. 
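The measures of equations (2)-(6) can be prototyped in a few lines once semantic overlap is available. In the sketch below a concept is simplified to a frozenset holding its BE and its equivalence class, so overlap is just a non-empty intersection; all inputs are toy values and the threshold t is treated as a free parameter (around 20, as suggested above).

import math

def overlaps(c, k):
    # Concepts as frozensets of BE strings (the BE plus its equivalence class).
    return bool(c & k)

def gamma(c, TK_q):
    # Equation (3): importance of c in the total knowledge space TK_q.
    n = sum(1 for d in TK_q if any(overlaps(k, c) for k in d))
    return 0.0 if n == 0 else (n / len(TK_q)) * math.log2(len(TK_q) / n)

def coverage(answer, TK_q):
    # Equation (2): concept importance weighted by term frequency in the answer.
    return sum(gamma(c, TK_q) * answer.count(c) for c in set(answer))

def relevance(c, question):
    # Equation (4): fraction of question concepts that overlap c.
    return sum(1 for k in question if overlaps(k, c)) / len(question)

def novelty(c, TA_q):
    # Equation (5): 1 minus the fraction of answers to q containing a concept
    # that overlaps c.
    return 1.0 - sum(1 for d in TA_q if any(overlaps(k, c) for k in d)) / len(TA_q)

def score_pi(quality, cov, rel, nov, sentence_length, t=20):
    # Equation (6): product of the four measures times log_t(sentence length).
    return quality * cov * rel * nov * math.log(sentence_length, t)

# Toy data: one answer, a question and a two-answer TA_q set
bear = frozenset({"bear|climb", "climb|bears"})
spray = frozenset({"spray|carry"})
answer, question = [bear, spray], [frozenset({"bear|protect"}), frozenset({"climb|bears"})]
TK_q, TA_q = [[bear], [spray, frozenset({"noise|make"})]], [[bear], [spray]]
print(coverage(answer, TK_q), relevance(bear, question), novelty(bear, TA_q))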
The concept score for the same BE in two separate answers is very likely to be different because it belongs to answers with their own Quality and Coverage values: this only makes the scoring function context-dependent and does not interfere with the calculation the Coverage, Relevance and Novelty measures, which are based on information overlap and will regard two BEs with overlapping equivalence classes as being the same, regardless of their score being different. 2.6 Quality constrained summarization The previous sections showed how we quantitatively determined which concepts were more worthy of becoming part of the final machine summary M. The final step was to generate the summary itself by automatically selecting sentences under a length constraint. Choosing this constraint carefully demonstrated to be of crucial importance during the experimental phase. We again opted for a metadata-driven approach and designed the length constraint as a function of the lengths of all answers to q (TAq) weighted by the respective Quality measures: lengthM = X a∈TAq length(a) · Q(Ψa) (8) The intuition was that the longer and the more trustworthy answers to a question were, the more space was reasonable to allocate for information in the final, machine summarized answer M. M was generated so as to maximize the scores of the concepts it included. This was done under a maximum coverage model by solving the following Integer Linear Programming problem: maximize: X i S(ci) · xi (9) subject to: X j length(j) · sj ≤lengthM X j yj · occij ≥xi ∀i (10) occij, xi, yj ∈{0, 1} ∀i, j occij = 1 if ci ∈sj, ∀i, j xi = 1 if ci ∈M, ∀i yj = 1 if sj ∈M, ∀j In the above program, M is the set of selected sentences: M = {sj : yj = 1, ∀j}. The integer variables xi and yj were equals to one if the corresponding concept ci and sentence sj were included in M. Similarly occij was equal to one if concept ci was contained in sentence sj. We maximized the sum of scores S(ci) (for S equals to SΠ or SΣ) for each concept ci in the final summary M. We did so under the constraint that the total length of all sentences sj included in M must be less than the total expected length of the summary itself. In addition, we imposed a consistency constraint: if a concept ci was included in M, then at least one sentence sj that contained the concept must also be selected (constraint (10)). The described optimization problem was solved using lp solve 6. We conclude with an empirical side note: since solving the above can be computationally very demanding for large number of concepts, we found performance-wise very fruitful to skim about one fourth of the concepts with lowest scores. 3 Experiments 3.1 Datasets and filters The initial dataset was composed of 216,563 questions and 1,982,006 answers written by 171,676 user in 100 categories from the Yahoo! Answers portal7. We will refer to this dataset as the “unfiltered version”. The metadata described in section 2.1 was extracted and normalized; quality experiments (Section 3.2) were then conducted. The unfiltered version was later reduced to 89,814 question-answer pairs that showed statistical and linguistic properties which made them particularly adequate for our purpose. In particular, trivial, factoid and encyclopedia-answerable questions were 6the version used was lp solve 5.5, available at http: //lpsolve.sourceforge.net/5.5 7The reader is encouraged to contact the authors regarding the availability of data and filters described in this Section. 
764 removed by applying a series of patterns for the identification of complex questions. The work by Liu et al. (2008) indicates some categories of questions that are particularly suitable for summarization, but due to the lack of high-performing question classifiers we resorted to human-crafted question patterns. Some pattern examples are the following: • {Why,What is the reason} [...] • How {to,do,does,did} [...] • How {is,are,were,was,will} [...] • How {could,can,would,should} [...] We also removed questions that showed statistical values outside of convenient ranges: the number of answers, length of the longest answer and length of the sum of all answers (both absolute and normalized) were taken in consideration. In particular we discarded questions with the following characteristics: • there were less than three answers 8 • the longest answer was over 400 words (likely a copy-and-paste) • the sum of the length of all answers outside of the (100, 1000) words interval • the average length of answers was outside of the (50, 300) words interval At this point a second version of the dataset was created to evaluate the summarization performance under scoring function (6) and (7); it was generated by manually selecting questions that arouse subjective, human interest from the previous 89,814 question-answer pairs. The dataset size was thus reduced to 358 answers to 100 questions that were manually summarized (refer to Section 3.3). From now on we will refer to this second version of the dataset as the “filtered version”. 3.2 Quality assessing In Section 2.1 we claimed to be able to identify high quality content. To demonstrate it, we conducted a set of experiments on the original unfiltered dataset to establish whether the feature space Ψ was powerful enough to capture the quality of answers; our specific objective was to estimate the 8Being too easy to summarize or not requiring any summarization at all, those questions wouldn’t constitute an valuable test of the system’s ability to extract information. Figure 1: Precision values (Y-axis) in detecting best answers a⋆with increasing training set size (X-axis) for a Linear Regression classifier on the unfiltered dataset. amount of training examples needed to successfully train a classifier for the quality assessing task. The Linear Regression9 method was chosen to determine the probability Q(Ψa) of a to be a best answer to q; as explained in Section 2.1, those probabilities were interpreted as quality estimates. The evaluation of the classifier’s output was based on the observation that given the set of all answers TAq relative to q and the best answer a⋆, a successfully trained classifier should be able to rank a⋆ahead of all other answers to the same question. More precisely, we defined Precision as follows: |{q ∈TrQ : ∀a ∈TAq, Q(Ψa⋆) > Q(Ψa)}| |TrQ| where the numerator was the number of questions for which the classifier was able to correctly rank a⋆by giving it the highest quality estimate in TAq and the denominator was the total number of examples in the training set TrQ. Figure 1 shows the precision values (Y-axis) in identifying best answers as the size of TrQ increases (X-axis). The experiment started from a training set of size 100 and was repeated adding 300 examples at a time until precision started decreasing. With each increase in training set size, the experiment was repeated ten times and average precision values were calculated. 
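A simple, hypothetical illustration of this Precision measure (not the authors' evaluation code): for each question, check whether the quality estimate of the chosen best answer exceeds that of every other answer to the same question.

def best_answer_precision(questions):
    # questions: list of pairs (quality of a*, list of qualities of the other
    # answers to the same question), as produced by the trained classifier.
    correct = sum(1 for q_star, others in questions if all(q_star > q for q in others))
    return correct / len(questions)

# Toy quality estimates
print(best_answer_precision([
    (0.92, [0.40, 0.75]),   # a* correctly ranked first
    (0.55, [0.61, 0.20]),   # a* outranked by another answer
]))                          # 0.5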
In all runs, training examples were picked randomly from the unfiltered dataset described in Section 3.1; for details on TrQ see Section 2.1. A training set of 12,000 examples was chosen for the summarization experiments. 9Performed with Weka 3.7.0 available at http://www. cs.waikato.ac.nz/˜ml/weka 765 System a⋆(baseline) SΣ SΠ ROUGE-1 R 51.7% 67.3% 67.4% ROUGE-1 P 62.2% 54.0% 71.2% ROUGE-1 F 52.9% 59.3% 66.1% ROUGE-2 R 40.5% 52.2% 58.8% ROUGE-2 P 49.0% 41.4% 63.1% ROUGE-2 F 41.6% 45.9% 57.9% ROUGE-L R 50.3% 65.1% 66.3% ROUGE-L P 60.5% 52.3% 70.7% ROUGE-L F 51.5% 57.3% 65.1% Table 1: Summarization Evaluation on filtered dataset (refer to Section 3.1 for details). ROUGE-L, ROUGE-1 and ROUGE-2 are presented; for each, Recall (R), Precision (P) and F-1 score (F) are given. 3.3 Evaluating answer summaries The objective of our work was to summarize answers from cQA portals. Two systems were designed: Table 1 shows the performances using function SΣ (see equation (7)), and function SΠ (see equation (6)). The chosen best answer a⋆ was used as a baseline. We calculated ROUGE-1 and ROUGE-2 scores10 against human annotation on the filtered version of the dataset presented in Section 3.1. The filtered dataset consisted of 358 answers to 100 questions. For each questions q, three annotators were asked to produce an extractive summary of the information contained in TAq by selecting sentences subject to a fixed length limit of 250 words. The annotation resulted in 300 summaries (larger-scale annotation is still ongoing). For the SΣ system, 200 of the 300 generated summaries were used for training and the remaining were used for testing (see the definition of TrS Section 2.5). Cross-validation was conducted. For the SΠ system, which required no training, all of the 300 summaries were used as the test set. SΣ outperformed the baseline in Recall (R) but not in Precision (P); nevertheless, the combined F1 score (F) was sensibly higher (around 5 points percentile). On the other hand, our SΠ system showed very consistent improvements of an order of 10 to 15 points percentile over the baseline on all measures; we would like to draw attention on the fact that even if Precision scores are higher, it is on Recall scores that greater improvements were achieved. This, together with the results obtained by SΣ, suggest performances could benefit 10Available at http://berouge.com/default. aspx Figure 2: Increase in ROUGE-L, ROUGE-1 and ROUGE2 performances of the SΠ system as more measures are taken in consideration in the scoring function, starting from Relevance alone (R) to the complete system (RQNC). F-1 scores are given. from the enforcement of a more stringent length constraint than the one proposed in (8). Further potential improvements on SΣ could be obtained by choosing a classifier able to learn a more expressive underlying function. In order to determine what influence the single measures had on the overall performance, we conducted a final experiment on the filtered dataset to evaluate (the SΠ scoring function was used). The evaluation was conducted in terms of F-1 scores of ROUGE-L, ROUGE-1 and ROUGE-2. First only Relevance was tested (R) and subsequently Quality was added (RQ); then, in turn, Coverage (RQC) and Novelty (RQN); Finally the complete system taking all measures in consideration (RQNC). Results are shown in Figure 2. 
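For reference, the core of ROUGE-1 is plain unigram overlap; the sketch below gives a simplified version (the official toolkit additionally handles stemming, stopword removal and multiple references), applied to two invented strings.

from collections import Counter

def rouge_1(candidate, reference):
    # Unigram recall, precision and F-1 between a candidate and a reference summary.
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    r = overlap / max(sum(ref.values()), 1)
    p = overlap / max(sum(cand.values()), 1)
    f = 0.0 if r + p == 0.0 else 2 * r * p / (r + p)
    return r, p, f

print(rouge_1("stand your ground and make noise",
              "make noise and stand your ground to scare the bear"))
# (0.6, 1.0, 0.75)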
In general performances increase smoothly with the exception of ROUGE-2 score, which seems to be particularly sensitive to Novelty: no matter what combination of measures is used (R alone, RQ, RQC), changes in ROUGE-2 score remain under one point percentile. Once Novelty is added, performances rise abruptly to the system’s highest. A summary example, along with the question and the best answer, is presented in Table 2. 4 Discussion and Future Directions We conclude by discussing a few alternatives to the approaches we presented. The lengthM constraint for the final summary (Section 2.6), could have been determined by making use of external knowledge such as TKq: since TKq represents 766 HOW TO PROTECT YOURSELF FROM A BEAR? http://answers.yahoo.com/question/index?qid= 20060818062414AA7VldB ***BEST ANSWER*** Great question. I have done alot of trekking through California, Montana and Wyoming and have met Black bears (which are quite dinky and placid but can go nuts if they have babies), and have been half an hour away from (allegedly) the mother of all grizzley s whilst on a trail through Glacier National park - so some other trekkerers told me... What the park wardens say is SING, SHOUT, MAKE NOISE...do it loudly, let them know you are there..they will get out of the way, it is a surprised bear wot will go mental and rip your little legs off..No fun permission: anything that will confuse them and stop them in their tracks...I have been told be an native american buddy that to keep a bottle of perfume in your pocket...throw it at the ground near your feet and make the place stink: they have good noses, them bears, and a mega concentrated dose of Britney Spears Obsessive Compulsive is gonna give em something to think about...Have you got a rape alarm? Def take that...you only need to distract them for a second then they will lose interest..Stick to the trails is the most important thing, and talk to everyone you see when trekking: make sure others know where you are. ***SUMMARIZED ANSWER*** [...] In addition if the bear actually approaches you or charges you.. still stand your ground. Many times they will not actually come in contact with you, they will charge, almost touch you than run away. [...] The actions you should take are different based on the type of bear. for example adult Grizzlies can t climb trees, but Black bears can even when adults. They can not climb in general as thier claws are longer and not semi-retractable like a Black bears claws. [...] I truly disagree with the whole play dead approach because both Grizzlies and Black bears are oppurtunistic animals and will feed on carrion as well as kill and eat animals. Although Black bears are much more scavenger like and tend not to kill to eat as much as they just look around for scraps. Grizzlies on the other hand are very accomplished hunters and will take down large prey animals when they want. [...] I have lived in the wilderness of Northern Canada for many years and I can honestly say that Black bears are not at all likely to attack you in most cases they run away as soon as they see or smell a human, the only places where Black bears are agressive is in parks with visitors that feed them, everywhere else the bears know that usually humans shoot them and so fear us. [...] Table 2: A summarized answer composed of five different portions of text generated with the SΠ scoring function; the chosen best answer is presented for comparison. 
The richness of the content and the good level of readability make it a successful instance of metadata-aware summarization of information in cQA systems. Less satisfying examples include summaries to questions that require a specific order of sentences or a compromise between strongly discordant opinions; in those cases, the summarized answer might lack logical consistency. the total knowledge available about q, a coverage estimate of the final answers against it would have been ideal. Unfortunately the lack of metadata about those answers prevented us from proceeding in that direction. This consideration suggests the idea of building TKq using similar answers in the dataset itself, for which metadata is indeed available. Furthermore, similar questions in the dataset could have been used to augment the set of answers used to generate the final summary with answers coming from similar questions. Wang et al. (2009a) presents a method to retrieve similar questions that could be worth taking in consideration for the task. We suggest that the retrieval method could be made Quality-aware. A Quality feature space for questions is presented by Agichtein et al. (2008) and could be used to rank the quality of questions in a way similar to how we ranked the quality of answers. The Quality assessing component itself could be built as a module that can be adjusted to the kind of Social Media in use; the creation of customized Quality feature spaces would make it possible to handle different sources of UGC (forums, collaborative authoring websites such as Wikipedia, blogs etc.). A great obstacle is the lack of systematically available high quality training examples: a tentative solution could be to make use of clustering algorithms in the feature space; high and low quality clusters could then be labeled by comparison with examples of virtuous behavior (such as Wikipedia’s Featured Articles). The quality of a document could then be estimated as a function of distance from the centroid of the cluster it belongs to. More careful estimates could take the position of other clusters and the concentration of nearby documents in consideration. Finally, in addition to the chosen best answer, a DUC-styled query-focused multi-document summary could be used as a baseline against which the performances of the system can be checked. 5 Related Work A work with a similar objective to our own is that of Liu et al. (2008), where standard multidocument summarization techniques are employed along with taxonomic information about questions. Our approach differs in two fundamental aspects: it took in consideration the peculiarities of the data in input by exploiting the nature of UGC and available metadata; additionally, along with relevance, we addressed challenges that are specific to Question Answering, such as Coverage and Novelty. For an investigation of Coverage in the context of Search Engines, refer to Swaminathan et al. (2009). At the core of our work laid information trustfulness, summarization techniques and alternative concept representation. A general approach to the broad problem of evaluating information credibility on the Internet is presented by Akamine et al. (2009) with a system that makes use of semantic-aware Natural Language Preprocessing techniques. With analogous goals, but a focus on UGC, are the papers of Stvilia et al. (2005), Mcguinness et al. (2006), Hu et al. (2007) and 767 Zeng et al. (2006), which present a thorough investigation of Quality and trust in Wikipedia. 
In the cQA domain, Jeon et al. (2006) presents a framework to use Maximum Entropy for answer quality estimation through non-textual features; with the same purpose, more recent methods based on the expertise of answerers are proposed by Suryanto et al. (2009), while Wang et al. (2009b) introduce the idea of ranking answers taking their relation to questions in consideration. The paper that we regard as most authoritative on the matter is the work by Agichtein et al. (2008) which inspired us in the design of the Quality feature space presented in Section 2.1. Our approach merged trustfulness estimation and summarization techniques: we adapted the automatic concept-level model presented by Gillick and Favre (2009) to our needs; related work in multi-document summarization has been carried out by Wang et al. (2008) and McDonald (2007). A relevant selection of approaches that instead make use of ML techniques for query-biased summarization is the following: Wang et al. (2007), Metzler and Kanungo (2008) and Li et al. (2009). An aspect worth investigating is the use of partially labeled or totally unlabeled data for summarization in the work of Wong et al. (2008) and Amini and Gallinari (2002). Our final contribution was to explore the use of Basic Elements document representation instead of the widely used n-gram paradigm: in this regard, we suggest the paper by Zhou et al. (2006). 6 Conclusions We presented a framework to generate trustful, complete, relevant and succinct answers to questions posted by users in cQA portals. We made use of intrinsically available metadata along with concept-level multi-document summarization techniques. Furthermore, we proposed an original use for the BE representation of concepts and tested two concept-scoring functions to combine Quality, Coverage, Relevance and Novelty measures. Evaluation results on human annotated data showed that our summarized answers constitute a solid complement to best answers voted by the cQA users. We are in the process of building a system that performs on-line summarization of large sets of questions and answers from Yahoo! Answers. Larger-scale evaluation of results against other state-of-the-art summarization systems is ongoing. Acknowledgments This work was partly supported by the Chinese Natural Science Foundation under grant No. 60803075, and was carried out with the aid of a grant from the International Development Research Center, Ottawa, Canada. We would like to thank Prof. Xiaoyan Zhu, Mr. Yang Tang and Mr. Guillermo Rodriguez for the valuable discussions and comments and for their support. We would also like to thank Dr. Chin-yew Lin and Dr. Eugene Agichtein from Emory University for sharing their data. References Eugene Agichtein, Carlos Castillo, Debora Donato, Aristides Gionis, and Gilad Mishne. 2008. Finding high-quality content in social media. In Marc Najork, Andrei Z. Broder, and Soumen Chakrabarti, editors, Proceedings of the International Conference on Web Search and Web Data Mining, WSDM 2008, Palo Alto, California, USA, February 11-12, 2008, pages 183–194. ACM. Susumu Akamine, Daisuke Kawahara, Yoshikiyo Kato, Tetsuji Nakagawa, Kentaro Inui, Sadao Kurohashi, and Yutaka Kidawara. 2009. Wisdom: a web information credibility analysis system. In ACLIJCNLP ’09: Proceedings of the ACL-IJCNLP 2009 Software Demonstrations, pages 1–4, Morristown, NJ, USA. Association for Computational Linguistics. Massih-Reza Amini and Patrick Gallinari. 2002. 
The use of unlabeled data to improve supervised learning for text summarization. In SIGIR ’02: Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 105–112, New York, NY, USA. ACM. Dan Gillick and Benoit Favre. 2009. A scalable global model for summarization. In ILP ’09: Proceedings of the Workshop on Integer Linear Programming for Natural Langauge Processing, pages 10–18, Morristown, NJ, USA. Association for Computational Linguistics. Meiqun Hu, Ee-Peng Lim, Aixin Sun, Hady Wirawan Lauw, and Ba-Quy Vuong. 2007. Measuring article quality in wikipedia: models and evaluation. In CIKM ’07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 243–252, New York, NY, USA. ACM. Jiwoon Jeon, W. Bruce Croft, Joon Ho Lee, and Soyeon Park. 2006. A framework to predict the quality of 768 answers with non-textual features. In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 228–235, New York, NY, USA. ACM. Liangda Li, Ke Zhou, Gui-Rong Xue, Hongyuan Zha, and Yong Yu. 2009. Enhancing diversity, coverage and balance for summarization through structure learning. In WWW ’09: Proceedings of the 18th international conference on World wide web, pages 71–80, New York, NY, USA. ACM. Yuanjie Liu, Shasha Li, Yunbo Cao, Chin-Yew Lin, Dingyi Han, and Yong Yu. 2008. Understanding and summarizing answers in community-based question answering services. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 497–504, Manchester, UK, August. Coling 2008 Organizing Committee. Ryan T. McDonald. 2007. A study of global inference algorithms in multi-document summarization. In Giambattista Amati, Claudio Carpineto, and Giovanni Romano, editors, ECIR, volume 4425 of Lecture Notes in Computer Science, pages 557–564. Springer. Deborah L. Mcguinness, Honglei Zeng, Paulo Pinheiro Da Silva, Li Ding, Dhyanesh Narayanan, and Mayukh Bhaowal. 2006. Investigation into trust for collaborative information repositories: A wikipedia case study. In In Proceedings of the Workshop on Models of Trust for the Web, pages 3–131. Donald Metzler and Tapas Kanungo. 2008. Machine learned sentence selection strategies for querybiased summarization. In Proceedings of SIGIR Learning to Rank Workshop. Besiki Stvilia, Michael B. Twidale, Linda C. Smith, and Les Gasser. 2005. Assessing information quality of a community-based encyclopedia. In Proceedings of the International Conference on Information Quality. Maggy Anastasia Suryanto, Ee Peng Lim, Aixin Sun, and Roger H. L. Chiang. 2009. Quality-aware collaborative question answering: methods and evaluation. In WSDM ’09: Proceedings of the Second ACM International Conference on Web Search and Data Mining, pages 142–151, New York, NY, USA. ACM. Ashwin Swaminathan, Cherian V. Mathew, and Darko Kirovski. 2009. Essential pages. In WI-IAT ’09: Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, pages 173–182, Washington, DC, USA. IEEE Computer Society. Changhu Wang, Feng Jing, Lei Zhang, and HongJiang Zhang. 2007. Learning query-biased web page summarization. In CIKM ’07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 555– 562, New York, NY, USA. ACM. Dingding Wang, Tao Li, Shenghuo Zhu, and Chris Ding. 2008. 
Multi-document summarization via sentence-level semantic analysis and symmetric matrix factorization. In SIGIR ’08: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 307–314, New York, NY, USA. ACM. Kai Wang, Zhaoyan Ming, and Tat-Seng Chua. 2009a. A syntactic tree matching approach to finding similar questions in community-based qa services. In SIGIR ’09: Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, pages 187–194, New York, NY, USA. ACM. Xin-Jing Wang, Xudong Tu, Dan Feng, and Lei Zhang. 2009b. Ranking community answers by modeling question-answer relationships via analogical reasoning. In SIGIR ’09: Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, pages 179–186, New York, NY, USA. ACM. Kam-Fai Wong, Mingli Wu, and Wenjie Li. 2008. Extractive summarization using supervised and semisupervised learning. In COLING ’08: Proceedings of the 22nd International Conference on Computational Linguistics, pages 985–992, Morristown, NJ, USA. Association for Computational Linguistics. Honglei Zeng, Maher A. Alhossaini, Li Ding, Richard Fikes, and Deborah L. McGuinness. 2006. Computing trust from revision history. In PST ’06: Proceedings of the 2006 International Conference on Privacy, Security and Trust, pages 1–1, New York, NY, USA. ACM. Liang Zhou, Chin Y. Lin, and Eduard Hovy. 2006. Summarizing answers for complicated questions. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC), Genoa, Italy. 769
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 770–779, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A hybrid rule/model-based finite-state framework for normalizing SMS messages Richard Beaufort1 Sophie Roekhaut2 Louise-Amélie Cougnon1 Cédrick Fairon1 (1) CENTAL, Université catholique de Louvain – 1348 Louvain-la-Neuve, Belgium {richard.beaufort,louise-amelie.cougnon,cedrick.fairon}@uclouvain.be (2) TCTS Lab, Université de Mons – 7000 Mons, Belgium [email protected] Abstract In recent years, research in natural language processing has increasingly focused on normalizing SMS messages. Different well-defined approaches have been proposed, but the problem remains far from being solved: best systems achieve a 11% Word Error Rate. This paper presents a method that shares similarities with both spell checking and machine translation approaches. The normalization part of the system is entirely based on models trained from a corpus. Evaluated in French by 10-fold-cross validation, the system achieves a 9.3% Word Error Rate and a 0.83 BLEU score. 1 Introduction Introduced a few years ago, Short Message Service (SMS) offers the possibility of exchanging written messages between mobile phones. SMS has quickly been adopted by users. These messages often greatly deviate from traditional spelling conventions. As shown by specialists (Thurlow and Brown, 2003; Fairon et al., 2006; Bieswanger, 2007), this variability is due to the simultaneous use of numerous coding strategies, like phonetic plays (2m1 read ‘demain’, “tomorrow”), phonetic transcriptions (kom instead of ‘comme’, “like”), consonant skeletons (tjrs for ‘toujours’, “always”), misapplied, missing or incorrect separators (j esper for ‘j’espère’, “I hope”; j’croibi1k, instead of ‘je crois bien que’, “I am pretty sure that”), etc. These deviations are due to three main factors: the small number of characters allowed per text message by the service (140 bytes), the constraints of the small phones’ keypads and, last but not least, the fact that people mostly communicate between friends and relatives in an informal register. Whatever their causes, these deviations considerably hamper any standard natural language processing (NLP) system, which stumbles against so many Out-Of-Vocabulary words. For this reason, as noted by Sproat et al. (2001), an SMS normalization must be performed before a more conventional NLP process can be applied. As defined by Yvon (2008), “SMS normalization consists in rewriting an SMS text using a more conventional spelling, in order to make it more readable for a human or for a machine.” The SMS normalization we present here was developed in the general framework of an SMSto-speech synthesis system1. This paper, however, only focuses on the normalization process. Evaluated in French, our method shares similarities with both spell checking and machine translation. The machine translation-like module of the system performs the true normalization task. It is entirely based on models learned from an SMS corpus and its transcription, aligned at the character-level in order to get parallel corpora. Two spell checking-like modules surround the normalization module. The first one detects unambiguous tokens, like URLs or phone numbers, to keep them out of the normalization. The second one, applied on the normalized parts only, identifies non-alphabetic sequences, like punctuations, and labels them with the corresponding token. 
This greatly helps the system’s print module to follow the basic rules of typography. This paper is organized as follows. Section 2 proposes an overview of the state of the art. Section 3 presents the general architecture of our system, while Section 4 focuses on how we learn and combine our normalization models. Section 5 evaluates the system and compares it to 1The Vocalise project. See cental.fltr.ucl.ac.be/team/projects/vocalise/. 770 previous works. Section 6 draws conclusions and considers some future possible improvements of the method. 2 Related work As highlighted by Kobus et al. (2008b), SMS normalization, up to now, has been handled through three well-known NLP metaphors: spell checking, machine translation and automatic speech recognition. In this section, we only present the pros and cons of these approaches. Their results are given in Section 5, focused on our evaluation. The spell checking metaphor (Guimier de Neef et al., 2007; Choudhury et al., 2007; Cook and Stevenson, 2009) performs the normalization task on a word-per-word basis. On the assumption that most words should be correct for the purpose of communication, its principle is to keep InVocabulary words out of the correction process. Guimier de Neef et al. (2007) proposed a rulebased system that uses only a few linguistic resources dedicated to SMS, like specific lexicons of abbreviations. Choudhury et al. (2007) and Cook and Stevenson (2009) preferred to implement the noisy channel metaphor (Shannon, 1948), which assumes a communication process in which a sender emits the intended message W through an imperfect (noisy) communication channel, such that the sequence O observed by the recipient is a noisy version of the original message. On this basis, the idea is to recover the intended message W hidden behind the sequences of observations O, by maximizing: Wmax = arg max P(W|O) (1) = arg max P(O|W) P(W) P(O) where P(O) is ignored because constant, P(O|W) models the channel’s noise, and P(W) models the language of the source. Choudhury et al. (2007) implemented the noisy channel through a Hidden-Markov Model (HMM) able to handle both graphemic variants and phonetic plays as proposed by (Toutanova and Moore, 2002), while Cook and Stevenson (2009) enhanced the model by adapting the channel’s noise P(O|W, wf) according to a list of predefined observed word formations {wf}: stylistic variation, word clipping, phonetic abbreviations, etc. Whatever the system, the main limitation of the spell checking approach is the excessive confidence it places in word boundaries. The machine translation metaphor, which is historically the first proposed (Bangalore et al., 2002; Aw et al., 2006), considers the process of normalizing SMS as a translation task from a source language (the SMS) to a target language (its standard written form). This standpoint is based on the observation that, on the one side, SMS messages greatly differ from their standard written forms, and that, on the other side, most of the errors cross word boundaries and require a wide context to be handled. On this basis, Aw et al. (2006) proposed a statistical machine translation model working at the phrase-level, by splitting sentences into their k most probable phrases. While this approach achieves really good results, Kobus et al. (2008b) make the assertion that a phrase-based translation can hardly capture the lexical creativity observed in SMS messages. 
Moreover, the translation framework, which can handle many-to-many correspondences between sources and targets, exceeds the needs of SMS normalization, where the normalization task is almost deterministic. Based on this analysis, Kobus et al. (2008b) proposed to handle SMS normalization through an automatic speech recognition (ASR) metaphor. The starting point of this approach is the observation that SMS messages present a lot of phonetic plays that sometimes make the SMS word (sré, mwa) closer to its phonetic representation ([sKe], [mwa]) than to its standard written form (serai, “will be”, moi, “me”). Typically, an ASR system tries to discover the best word sequence within a lattice of weighted phonetic sequences. Applied to the SMS normalization task, the ASR metaphor consists in first converting the SMS message into a phone lattice, before turning it into a word-based lattice using a phoneme-to-grapheme dictionary. A language model is then applied on the word lattice, and the most probable word sequence is finally chosen by applying a best-path algorithm on the lattice. One of the advantages of the grapheme-to-phoneme conversion is its intrinsic ability to handle word boundaries. However, this step also presents an important drawback, raised by the authors themselves: it prevents next normalization steps from knowing what graphemes were in the initial sequence. 771 Our approach, which is detailed in Sections 3 and 4, shares similarities with both the spell checking approach and the machine translation principles, trying to combine the advantages of these methods, while leaving aside their drawbacks: like in spell checking systems, we detect unambiguous units of text as soon as possible and try to rely on word boundaries when they seem reliable enough; but like in the machine translation task, our method intrinsically handles word boundaries in the normalization process if needed. 3 Overview of the system 3.1 Tools in use In our system, all lexicons, language models and sets of rules are compiled into finite-state machines (FSMs) and combined with the input text by composition (◦). The reader who is not familiar with FSMs and their fundamental theoretical properties, like composition, is urged to consult the state-of-the-art literature (Roche and Schabes, 1997; Mohri and Riley, 1997; Mohri et al., 2000; Mohri et al., 2001). We used our own finite-state tools: a finite-state machine library and its associated compiler (Beaufort, 2008). In conformance with the format of the library, the compiler builds finite-state machines from weighted rewrite rules, weighted regular expressions and n-gram models. 3.2 Aims We formulated four constraints before fixing the system’s architecture. First, special tokens, like URLs, phones or currencies, should be identified as soon as possible, to keep them out of the normalization process. Second, word boundaries should be taken into account, as far as they seem reliable enough. The idea, here, is to base the decision on a learning able to catch frequent SMS sequences to include in a dedicated In-Vocabulary (IV) lexicon. Third, any other SMS sequence should be considered as Out-Of-Vocabulary (OOV), on which in-depth rewritings may be applied. Fourth, the basic rules of typography and typesettings should be applied on the normalized version of the SMS message. 3.3 Architecture The architecture depicted in Figure 1 directly relies on these considerations. In short, an SMS message first goes through three SMS modules, which normalize its noisy parts. 
Then, two standard NLP modules produce a morphosyntactic analysis of the normalized text. A last module, finally, takes advantage of this linguistic analysis either to print a text that follows the basic rules of typography, or to synthesize the corresponding speech signal. Because this paper focuses on the normalization task, the rest of this section only presents the SMS modules and the "smart print" output. The morphosyntactic analysis, made of state-of-the-art algorithms, is described in (Beaufort, 2008), and the text-to-speech synthesis system we use is presented in (Colotte and Beaufort, 2005).

Figure 1: Architecture of the system. An SMS message goes through the SMS modules (preprocessing, normalization, postprocessing), then through the standard NLP modules (morphological analysis, contextual disambiguation), and is finally output either by the smart print module as a standard written message or by the TTS engine as speech.

3.3.1 SMS modules

SMS preprocessing. This module relies on a set of manually-tuned rewrite rules. It identifies paragraphs and sentences, but also some unambiguous tokens: URLs, phone numbers, dates, times, currencies, units of measurement and, last but not least in the context of SMS, smileys2. These tokens are kept out of the normalization process, while any other sequence of characters is considered - and labelled - as noisy.

SMS normalization. This module only uses models learned from a training corpus (cf. Section 4). It involves three steps. First, an SMS-dedicated lexicon look-up, which differentiates between known and unknown parts of a noisy token. Second, a rewrite process, which creates a lattice of weighted solutions. The rewrite model differs depending on whether the part to rewrite is known or not. Third, a combination of the lattice of solutions with a language model, and the choice of the best sequence of lexical units. At this stage, the normalization as such is completed.

SMS postprocessing. Like the preprocessor, the postprocessor relies on a set of manually-tuned rewrite rules. The module is only applied on the normalized version of the noisy tokens, in order to identify any non-alphabetic sequence and to isolate it in a distinct token. At this stage, for instance, a point becomes a 'strong punctuation'. Apart from the list of tokens already managed by the preprocessor, the postprocessor also handles numeric and alphanumeric strings, fields of data (like bank account numbers), punctuations and symbols.

3.3.2 Smart print

The smart print module, based on manually-tuned rules, checks either the kind of token (chosen by the SMS pre-/post-processing modules) or the grammatical category (chosen by the morphosyntactic analysis) to make the right typography choices, such as the insertion of a space after certain tokens (URLs, phone numbers), the insertion of two spaces after a strong punctuation (point, question mark, exclamation mark), the insertion of two carriage returns at the end of a paragraph, or the upper case of the initial letter at the beginning of the sentence.

2 Our list contains about 680 smileys.
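As an illustration of the kind of token protection performed by the preprocessing module, consider the following sketch. It is only an approximation of the idea: the real module uses manually-tuned rewrite rules compiled into FSMs, and the patterns below (including the smiley pattern) are simplified stand-ins, not the actual rules.

import re

# Illustrative patterns only; the real system uses manually-tuned FSM rewrite rules.
PROTECTED = [
    ("URL",      re.compile(r"(?:https?://|www\.)\S+")),
    ("PHONE",    re.compile(r"\+?\d[\d ./-]{6,}\d")),
    ("TIME",     re.compile(r"\b\d{1,2}[:h]\d{2}\b")),
    ("CURRENCY", re.compile(r"\b\d+(?:[.,]\d+)?\s?(?:€|EUR|\$)")),
    ("SMILEY",   re.compile(r"[:;=]-?[)(DPp]")),
]

def preprocess(sms: str):
    """Label unambiguous tokens; everything else is considered noisy."""
    spans = []
    for label, pattern in PROTECTED:
        for m in pattern.finditer(sms):
            spans.append((m.start(), m.end(), label))
    spans.sort()
    tokens, pos = [], 0
    for start, end, label in spans:
        if start < pos:                      # overlapping match already covered
            continue
        if sms[pos:start].strip():
            tokens.append(("NOISY", sms[pos:start].strip()))
        tokens.append((label, sms[start:end]))
        pos = end
    if sms[pos:].strip():
        tokens.append(("NOISY", sms[pos:].strip()))
    return tokens

print(preprocess("rdv 18h30 ;) www.example.com"))
# [('NOISY', 'rdv'), ('TIME', '18h30'), ('SMILEY', ';)'), ('URL', 'www.example.com')]

Only the sequences labelled NOISY would then be passed on to the normalization module, which is the behaviour described above.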
4 The normalization models

4.1 Overview of the normalization algorithm

Our approach is an approximation of the noisy channel metaphor (cf. Section 2). It differs from this general framework, because we adapt the model of the channel's noise depending on whether the noisy token (our sequence of observations) is In-Vocabulary or Out-Of-Vocabulary:

P(O|W) = PIV(O|W) if O ∈ IV, POOV(O|W) otherwise    (2)

Indeed, our algorithm is based on the assumption that applying different normalization models to IV and OOV words should both improve the results and reduce the processing time. For this purpose, the first step of the algorithm consists in composing a noisy token T with an FST Sp whose task is to differentiate between sequences of IV words and sequences of OOV words, by labelling them with a special IV or OOV marker. The token is then split into n segments sgi according to these markers:

{sg} = Split(T ◦ Sp)    (3)

In a second step, each segment is composed with a rewrite model according to its kind: the IV rewrite model RIV for sequences of IV words, and the OOV rewrite model ROOV for sequences of OOV words:

sg'i = sgi ◦ RIV if sgi ∈ IV, sgi ◦ ROOV otherwise    (4)

All rewritten segments are then concatenated together in order to get back the complete token:

T = ⊙i=1..n sg'i    (5)

where ⊙ is the concatenation operator. The third and last normalization step is applied on a complete sentence S. All tokens Tj of S are concatenated together and composed with the lexical language model LM. The result of this composition is a word lattice, of which we take the most probable word sequence S' by applying a best-path algorithm:

S' = BestPath( (⊙j=1..m Tj) ◦ LM )    (6)

where m is the number of tokens of S. In S', each noisy token Tj of S is mapped onto its most probable normalization.
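The control flow of this three-step algorithm can be paraphrased in a few lines of Python. This is only a sketch under simplifying assumptions: the weighted FSTs Sp, RIV, ROOV and LM are replaced here by a plain lexicon, candidate dictionaries and a toy unigram scorer, so no lattice is actually built and all names and toy entries are invented for the example.

from itertools import product

IV_LEXICON = {"je", "tu", "va"}                                   # toy stand-in for I
IV_REWRITES = {"je": ["je"], "tu": ["tu"], "va": ["va", "vas"]}   # stand-in for RIV
OOV_REWRITES = {"k": ["que", "k"], "sré": ["serai", "serré"]}     # stand-in for ROOV

def split_token(token):
    """Stand-in for T ◦ Sp: label each word-like unit IV or OOV."""
    return [(w, "IV" if w in IV_LEXICON else "OOV") for w in token.split()]

def rewrite(segment, kind):
    """Stand-in for sg ◦ RIV / sg ◦ ROOV: return candidate normalizations."""
    table = IV_REWRITES if kind == "IV" else OOV_REWRITES
    return table.get(segment, [segment])

def lm_score(words, lm):
    """Toy unigram 'language model' (log-probabilities) instead of the lexical FST LM."""
    return sum(lm.get(w, -5.0) for w in words)

def normalize(tokens, lm):
    """Candidate expansion per token, then a best-path-like choice with the LM."""
    candidate_lists = []
    for token in tokens:
        segments = [rewrite(seg, kind) for seg, kind in split_token(token)]
        candidate_lists.append([" ".join(c) for c in product(*segments)])
    best = max(product(*candidate_lists),
               key=lambda sent: lm_score(" ".join(sent).split(), lm))
    return " ".join(best)

toy_lm = {"je": -1.0, "serai": -2.0, "que": -1.5, "tu": -1.0, "vas": -2.0, "va": -2.5}
print(normalize(["je sré", "k", "tu va"], toy_lm))    # -> "je serai que tu vas"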
4.2 The corpus alignment

Our normalization models were trained on a French SMS corpus of 30,000 messages, gathered in Belgium, semi-automatically anonymized and manually normalized by the Catholic University of Louvain (Fairon and Paumier, 2006). Together, the SMS corpus and its transcription constitute parallel corpora aligned at the message level. However, in order to learn pieces of knowledge from these corpora, we needed a string alignment at the character level. One way of implementing this string alignment is to compute the edit distance of two strings, which measures the minimum number of operations (substitutions, insertions, deletions) required to transform one string into the other (Levenshtein, 1966). Using this algorithm, in which each operation gets a cost of 1, two strings may be aligned in different ways with the same global cost. This is the case, for instance, for the SMS form kozer ([koze]) and its standard transcription causé ("talked"), as illustrated by Figure 2. However, from a linguistic standpoint, alignment (1) is preferable, because corresponding graphemes are aligned on their first character. In order to automatically choose this preferred alignment, we had to distinguish the three edit operations, according to the characters to be aligned. For that purpose, probabilities were required. Computing probabilities for each operation according to the characters to be aligned was performed through an iterative algorithm described in (Cougnon and Beaufort, 2009). In short, this algorithm gradually learns the best way of aligning strings. On our parallel corpora, it converged after 7 iterations and provided us with a result from which the learning could start.

Figure 2: Different equidistant alignments, using a standard edit cost of 1: (1) ko_ser / causé_, (2) k_oser / causé_, (3) ko_ser / caus_é, (4) k_oser / caus_é. Underscores ('_') mean insertion in the upper string, and deletion in the lower string.

4.3 The split model Sp

In natural language processing, a word is commonly defined as "a sequence of alphabetic characters between separators", and an IV word is simply a word that belongs to the lexicon in use. In SMS messages however, separators are surely indicative, but not reliable. For this reason, our definition of the word is far from the previous one, and originates from the string alignment. After examining our parallel corpora aligned at the character level, we decided to consider as a word "the longest sequence of characters parsed without meeting the same separator on both sides of the alignment". For instance, the following alignment

J esper_ k___tu va_
J'espère que tu vas
(I hope that you will)

is split as follows according to our definition:

[J esper_ / J'espère]  [k___tu / que tu]  [va_ / vas]

since the separator in "J esper" is different from its transcription, and "ktu" does not contain any separator. Thus, this SMS sequence corresponds to 3 SMS words: [J esper], [ktu] and [va]. A first parsing of our parallel corpora provided us with a list of SMS sequences corresponding to our IV lexicon. The FST Sp is built on this basis:

Sp = ( S* (I|O) ( S+ (I|O) )* S* ) ◦ G    (7)

where:
• I is an FST corresponding to the lexicon, in which IV words are mapped onto the IV marker.
• O is the complement of I3. In this OOV lexicon, OOV sequences are mapped onto the OOV marker.
• S is an FST corresponding to the list of separators (any non-alphabetic and non-numeric character), mapped onto a SEP marker.
• G is an FST able to detect consecutive sequences of IV (resp. OOV) words, and to group them under a unique IV (resp. OOV) marker. By gathering sequences of IVs and OOVs, SEP markers disappear from Sp.

3 Actually, the true complement of I accepts sequences with separators, while these sequences were removed from O.

Figure 3 illustrates the composition of Sp with the SMS sequence J esper kcv b1 (J'espère que ça va bien, "I hope you are well"). For the example, we make the assumption that kcv was never seen during the training.

Figure 3: Application of the split model Sp to the sequence J esper kcv b1: the parts seen in training are labelled IV, while kcv is labelled OOV. The OOV sequence starts and ends with separators.

4.4 The IV rewrite model RIV

This model is built during a second parsing of our parallel corpora. In short, the parsing simply gathers all possible normalizations for each SMS sequence put, by the first parsing, in the IV lexicon. Contrary to the first parsing, this second one processes the corpus without taking separators into account, in order to make sure that all possible normalizations are collected. Each normalization w̄ for a given SMS sequence w is weighted as follows:

p(w̄|w) = Occ(w̄, w) / Occ(w)    (8)

where Occ(x) is the number of occurrences of x in the corpus. The FST RIV is then built as follows:

RIV = SIV* IVR ( SIV+ IVR )* SIV*    (9)

where:
• IVR is a weighted lexicon compiled into an FST, in which each IV sequence is mapped onto the list of its possible normalizations.
• SIV is a weighted lexicon of separators, in which each separator is mapped onto the list of its possible normalizations. The deletion is often one of the possible normalizations of a separator. Otherwise, the deletion is added and is weighted by the following smoothed probability:

p(DEL|w) = 0.1 / (Occ(w) + 0.1)    (10)

4.5 The OOV rewrite model ROOV

In contrast to the other models, this one is not a regular expression made of weighted lexicons.
It corresponds to a set of weighted rewrite rules (Chomsky and Halle, 1968; Johnson, 1972; Mohri and Sproat, 1996) learned from the alignment. Developed in the framework of generative phonology, rules take the form φ →ψ : λ _ ρ / w (11) which means that the replacement φ →ψ is only performed when φ is surrounded by λ on the left and ρ on the right, and gets the weight w. However, in our case, rules take the simpler form φ →ψ / w (12) which means that the replacement φ →ψ is always performed, whatever the context. Inputs of our rules (φ) are sequences of 1 to 5 characters taken from the SMS side of the alignment, while outputs (ψ) are their corresponding normalizations. Our rules are sorted in the reverse order of the length of their inputs: rules with longer inputs come first in the list. Long-to-short rule ordering reduces the number of proposed normalizations for a given SMS sequence for two reasons: 1. the firing of a rule with a longer input blocks the firing of any shorter sub-rule. This is due to a constraint expressed on lists of rewrite rules: a given rule may be applied only if no more specific and relevant rule has been met higher in the list; 2. a rule with a longer input usually has fewer alternative normalizations than a rule with a shorter input does, because the longer SMS sequence likely occurred paired with fewer alternative normalizations in the training corpus than did the shorter SMS sequence. Among the wide set of possible sequences of 2 to 5 characters gathered from the corpus, we only kept in our list of rules the sequences that allowed at least one normalization solely made of IV words. It is important to notice that here, we refer to the standard notion of IV word: while gathering the candidate sequences from the corpus, we systematically checked each word of the normalizations against a lexicon of French 775 standard written forms. The lexicon we used contains about 430,000 inflected forms and is derived from Morlex4, a French lexical database. Figure 4 illustrates these principles by focusing on 3 input sequences: aussi, au and a. As shown by the Figure, all rules of a set dedicated to the same input sequence (for instance, aussi) are optional (?→), except the last one, which is obligatory (→). In our finite-state compiler, this convention allows the application of all concurrent normalizations on the same input sequence, as depicted in Figure 5. In our real list of OOV rules, the input sequence a corresponds to 231 normalizations, while au accepts 43 normalizations and aussi, only 3. This highlights the interest, in terms of efficiency, of the long-to-short rule ordering. 4.6 The language model Our language model is an n-gram of lexical forms, smoothed by linear interpolation (Chen and Goodman, 1998), estimated on the normalized part of our training corpus and compiled into a weighted FST LMw. At this point, this FST cannot be combined with our other models, because it works on lexical units and not on characters. This problem is solved by composing LMw with another FST L, which represents a lexicon mapping each input word, considered as a string of characters, onto the same output words, but considered here as a lexical unit. 
Lexical units are then permanently removed from the language model by keeping only the first projection (the input side) of the composition:

LM = FirstProjection( L ◦ LMw )    (13)

In this model, special characters, like punctuations or symbols, are represented by their categories (light, medium and strong punctuations, question mark, symbol, etc.), while special tokens, like URLs or phone numbers, are handled as token values (URL, phone, etc.) instead of as sequences of characters. This reduces the complexity of the model. As we explained earlier, tokens of a same sentence S are concatenated together at the end of the second normalization step. During this concatenation process, sequences corresponding to special tokens are automatically replaced by their token values. Special characters, however, are still present in S. For this reason, S is first composed with an FST Reduce, which maps each special character onto its corresponding category:

S ◦ Reduce ◦ LM    (14)

"aussi" ?-> "au si" / 8.4113 (*)
"aussi" ?-> "ou si" / 6.6743 (*)
"aussi" -> "aussi" / 0.0189 (*)
...
"au" ?-> "ow" / 14.1787
...
"au" ?-> "ôt" / 12.5938
"au" ?-> "du" / 12.1787 (*)
"au" ?-> "o" / 11.8568
...
"au" ?-> "on" / 10.8568 (*)
...
"au" ?-> "aud" / 9.9308
"au" ?-> "aux" / 6.1731 (*)
"au" -> "au" / 0.0611 (*)
...
"a" ?-> "a d" / 17.8624
"a" ?-> "ation" / 17.8624
"a" ?-> "âts" / 17.8624
...
"a" ?-> "ablement" / 16.8624
"a" ?-> "anisation" / 16.8624
...
"a" ?-> "u" / 15.5404
"a" ?-> "y a" / 15.5404
...
"a" ?-> "abilité" / 13.4029
"a" ?-> "à-" / 12.1899
"a" ?-> "ar" / 11.5225
"a" ?-> \DEL / 9.1175
"a" ?-> "ça" / 6.2019
"a" ?-> "à" / 3.5013
"a" -> "a" / 0.3012

Figure 4: Samples from the list of OOV rules. Rules' weights are negative logarithms of probabilities: smaller weights are thus better. Asterisks indicate normalizations solely made of French IV words.

Figure 5: Application of the OOV rules on the input sequence aussi. All normalizations corresponding to this sequence were allowed, while rules corresponding to shorter input sequences were ignored.

4 See http://bach.arts.kuleuven.be/pmertens/.

5 Evaluation

The performance and the efficiency of our system were evaluated on a MacBook Pro with a 2.4 GHz Intel Core 2 Duo CPU, 4 GB 667 MHz DDR2 SDRAM, running Mac OS X version 10.5.8. The evaluation was performed on the corpus of 30,000 French SMS presented in Section 4.2, by ten-fold cross-validation (Kohavi, 1995). The principle of this method of evaluation is to split the initial corpus into 10 subsets of equal size. The system is then trained 10 times, each time leaving out one of the subsets from the training corpus, but using only this omitted subset as test corpus. The language model of the evaluation is a 3-gram. We did not try a 4-gram. This choice was motivated by the experiments of Kobus et al. (2008a), who showed on a French corpus comparable to ours that, if using a larger language model is always rewarded, the improvement quickly decreases with every higher level and is already quite small between 2-gram and 3-gram. Table 1 presents the results in terms of efficiency. The system seems efficient, while we cannot compare it with other methods, which did not provide us with this information. Table 2, part 1, presents the performance of our approach (Hybrid) and compares it to a trivial copy-paste (Copy). The system was evaluated in terms of BLEU score (Papineni et al., 2001), Word Error Rate (WER) and Sentence Error Rate (SER).
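These error rates follow their usual definitions. As an illustration only (not the evaluation script actually used here), WER and SER can be computed with a word-level Levenshtein distance as follows; the breakdown into substitutions, deletions and insertions reported in Table 2 would additionally require a backtrace, which this sketch omits.

def word_errors(hyp, ref):
    """Minimum number of word substitutions, deletions and insertions (Levenshtein)."""
    h, r = hyp.split(), ref.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(r)][len(h)], len(r)

def wer_ser(hypotheses, references):
    errors = words = wrong_sentences = 0
    for hyp, ref in zip(hypotheses, references):
        e, n = word_errors(hyp, ref)
        errors += e
        words += n
        wrong_sentences += (hyp.split() != ref.split())
    return 100.0 * errors / words, 100.0 * wrong_sentences / len(references)

hyps = ["j espere que tu vas bien", "je serai la"]
refs = ["j'espère que tu vas bien", "je serai là demain"]
print(wer_ser(hyps, refs))   # WER and SER, in percent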
Concerning WER, the table presents the distribution between substitutions (Sub), deletions (Del) and insertions (Ins). The copy-paste results just inform about the real deviation of our corpus from the traditional spelling conventions, and highlight the fact that our system is still at pains to significantly reduce the SER, while results in terms of WER and BLEU score are quite encouraging. Table 2, part 2, provides the results of the state-of-the-art approaches. The only results truly comparable to ours are those of Guimier de Neef et al. (2007), who evaluated their approach on the same corpus as ours5; clearly, our method outperforms theirs. Our results also seem a bit better than those of Kobus et al. (2008a), although the comparison with this system, also evaluated in French, is less easy: they combined the French corpus we used with another one and performed a single validation, using a bigger training corpus (36,704 messages) for a test corpus quite similar to one of our subsets (2,998 SMS). Other systems were evaluated in English, and results are more difficult to compare; at least, our results seem in line with them.

5 They performed an evaluation without ten-fold cross-validation, because their rule-based system did not need any training.

The analysis of the normalizations produced by our system pointed out that, most often, errors are contextual and concern the gender (quel(le), "what"), the number (bisou(s), "kiss"), the person ([tu t']inquiète(s), "you are worried") or the tense (arrivé/arriver, "arrived"/"to arrive"). That contextual errors are frequent is not surprising. In French, as mentioned by Kobus et al. (2008b), n-gram models are unable to catch this information, as it is generally out of their scope. On the other hand, this analysis confirmed our initial assumptions. First, special tokens (URLs, phones, etc.) are not modified. Second, agglutinated words are generally split (Pensa ms → Pense à mes, "think to my"), while misapplied separators tend to be deleted (G t → J'étais, "I was"). Of course, we also found some errors at word boundaries ([il] l'arrange → [il] la range, "[he] arranges" → "[he] puts in order"), but they were fairly rare.

Table 1: Efficiency of the system.
                 mean      dev.
bps              1836.57   159.63
ms/SMS (140b)    76.23     22.34

Table 2: Performance of the system. Part 1 gives our approach (ten-fold cross-validation, French); part 2 gives the state of the art (French and English). (∗) Kobus 2008-1 corresponds to the ASR-like system, while Kobus 2008-2 is a combination of this system with a series of open-source machine translation toolkits. (∗∗) Scores obtained on noisy data only, out of the sentence's context.

        1. Our approach               2. State of the art
        Copy           Hybrid         Guimier   Kobus      Aw      Choud.     Cook
        x̄      σ       x̄      σ      2007      2008(∗)    2006    (∗∗)       2009(∗∗)
Sub.    25.90  1.65    6.69   0.45   11.94
Del.    8.24   0.74    1.89   0.31   2.36
Ins.    0.46   0.08    0.72   0.10   2.21
WER     34.59  2.37    9.31   0.78   16.51     10.82               41.00      44.60
SER     85.74  0.87    65.07  1.85   76.05
BLEU    0.47   0.03    0.83   0.01   0.736     0.8        0.81
(x̄ = mean, σ = standard deviation)

6 Conclusion and perspectives

In this paper, we presented an SMS normalization framework based on finite-state machines and developed in the context of an SMS-to-speech synthesis system. With the intention to avoid wrong modifications of special tokens and to handle word boundaries as easily as possible, we designed a method that shares similarities with both spell checking and machine translation. Our normalization algorithm is original in two ways. First, it is entirely based on models learned from a training corpus.
Second, the rewrite model applied to a noisy sequence differs depending on whether this sequence is known or not. Evaluated by ten-fold cross-validation, the system seems efficient, and the performance in terms of BLEU score and WER are quite encouraging. However, the SER remains too high, which emphasizes the fact that the system needs several improvements. First of all, the model should take phonetic similarities into account, because SMS messages contain a lot of phonetic plays. The phonetic model, for instance, should know that o, au, eau, . . . , aux can all be pronounced [o], while è, ais, ait, . . . , aient are often pronounced [E]. However, unlike Kobus et al. (2008a), we feel that this model must avoid the normalization step in which the graphemic sequence is converted into phonemes, because this conversion prevents the next steps from knowing which graphemes were in the initial sequence. Instead, we propose to learn phonetic similarities from a dictionary of words with phonemic transcriptions, and to build graphemes-to-graphemes rules. These rules could then be automatically weighted, by learning their frequencies from our aligned corpora. Furthermore, this model should be able to allow for timbre variation, like [e]–[E], in order to allow similarities between graphemes frequently confused in French, like ai ([e]) and ais/ait/aient ([E]). Last but not least, the graphemes-tographemes rules should be contextualized, in order to reduce the complexity of the model. It would also be interesting to test the impact of another lexical language model, learned on nonSMS sentences. Indeed, the lexical model must be learned from sequences of standard written forms, an obvious prerequisite that involves a major drawback when the corpus is made of SMS sentences: the corpus must first be transcribed, an expensive process that reduces the amount of data on which the model will be trained. For this reason, we propose to learn a lexical model from non-SMS sentences. However, the corpus of external sentences should still share two important features with the SMS language: it should mimic the oral language and be as spontaneous as possible. With this in mind, our intention is to gather sentences from Internet forums. But not just any forum, because often forums share another feature with the SMS language: their language is noisy. Thus, the idea is to choose a forum asking its members to pay attention to spelling mistakes and grammatical errors, and to avoid the use of the SMS language. Acknowledgments This research was funded by grants no. 716619 and 616422 from the Walloon Region of Belgium, and supported by the Multitel research centre. We sincerely thank our anonymous reviewers for their insightful and helpful comments on the first version of this paper. References AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text 778 normalization. In Proc. COLING/ACL 2006. Srinivas Bangalore, Vanessa Murdock, and Giuseppe Riccardi. 2002. Bootstrapping bilingual data using consensus translation for a multilingual instant messaging system. In Proc. the 19th international conference on Computational linguistics, pages 1– 7, Morristown, NJ, USA. Richard Beaufort. 2008. Application des machines à etats finis en synthèse de la parole. Sélection d’unités non uniformes et correction orthographique. Ph.D. thesis, FUNDP, Namur, Belgium, March. 605 pages. Markus Bieswanger. 2007. 
abbrevi8 or not 2 abbrevi8: A contrastive analysis of different space and timesaving strategies in English and German text messages. In Texas Linguistics Forum, volume 50. Stanley F. Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report 10-98, Computer Science Group, Harvard University. Noam Chomsky and Morris Halle. 1968. The sound pattern of English. Harper and Row, New York, NY. Monojit Choudhury, Rahul Saraf, Vijit Jain, Animesh Mukherjee, Sudeshna Sarkar1, and Anupam Basu. 2007. Investigation and modeling of the structure of texting language. International Journal on Document Analysis and Recognition, 10(3):157– 174. Vincent Colotte and Richard Beaufort. 2005. Linguistic features weighting for a text-to-speech system without prosody model. In Proc. Interspeech’05, pages 2549–2552. Paul Cook and Suzanne Stevenson. 2009. An unsupervised model for text message normalization. In Proc. Workshop on Computational Approaches to Linguistic Creativity, pages 71–78. Louise-Amélie Cougnon and Richard Beaufort. 2009. SSLD: a French SMS to standard language dictionary. In Sylviane Granger and Magali Paquot, editors, Proc. eLexicography in the 21st century: New applications, new challenges (eLEX 2009). Presses Universitaires de Louvain. To appear. Cédrick Fairon and Sébastien Paumier. 2006. A translated corpus of 30,000 French SMS. In Proc. LREC 2006, May. Cécrick. Fairon, Jean R. Klein, and Sébastien Paumier. 2006. Le langage SMS: étude d’un corpus informatisé à partir de l’enquête Faites don de vos SMS à la science. Presses Universitaires de Louvain. 136 pages. Emilie Guimier de Neef, Arnaud Debeurme, and Jungyeul Park. 2007. TILT correcteur de SMS: évaluation et bilan quantitatif. In Actes de TALN 2007, pages 123–132, Toulouse, France. C. Douglas Johnson. 1972. Formal aspects of phonological description. Mouton, The Hague. Catherine Kobus, François Yvon, and Géraldine Damnati. 2008a. Normalizing SMS: are two metaphors better than one? In Proc. COLING 2008, pages 441–448, Manchester, UK. Catherine Kobus, François Yvon, and Géraldine Damnati. 2008b. Transcrire les SMS comme on reconnaît la parole. In Actes de la Conférence sur le Traitement Automatique des Langues (TALN’08), pages 128–138, Avignon, France. Ron Kohavi. 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proc. IJCAI’95, pages 1137–1143. Vladimir Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics, 10:707–710. Mehryar Mohri and Michael Riley. 1997. Weighted determinization and minimization for large vocabulary speech recognition. In Proc. Eurospeech’97, pages 131–134. Mehryar Mohri and Richard Sproat. 1996. An efficient compiler for weighted rewrite rules. In Proc. ACL’96, pages 231–238. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2000. The design principles of a weighted finitestate transducer library. Theoretical Computer Science, 231(1):17–32. Mehryar Mohri, Fernando Pereira, and Michael Riley. 2001. Generic ϵ-removal algorithm for weighted automata. Lecture Notes in Computer Science, 2088:230–242. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. In Proc. ACL 2001, pages 311–318. Emmanuel Roche and Yves Schabes, editors. 1997. Finite-state language processing. MIT Press, Cambridge. Claude E. Shannon. 1948. A mathematical theory of communication. 
The Bell System Technical Journal, 27:379–423. Richard Sproat, A.W. Black, S. Chen, S. Kumar, M. Ostendorf, and C. Richards. 2001. Normalization of non-standard words. Computer Speech & Language, 15(3):287–333. Crispin Thurlow and Alex Brown. 2003. Generation txt? The sociolinguistics of young people’s textmessaging. Discourse Analysis Online, 1(1). Kristina Toutanova and Robert C. Moore. 2002. Pronunciation modeling for improved spelling correction. In Proc. ACL’02, pages 144–151. François Yvon. 2008. Reorthography of SMS messages. Technical Report 2008, LIMSI/CNRS, Orsay, France. 779
Learning to Adapt to Unknown Users: Referring Expression Generation in Spoken Dialogue Systems

Srinivasan Janarthanam
School of Informatics, University of Edinburgh
[email protected]

Oliver Lemon
Interaction Lab, Mathematics and Computer Science (MACS), Heriot-Watt University
[email protected]

Abstract

We present a data-driven approach to learn user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical 'jargon' names of the domain entities. In such cases, dialogue systems must be able to model the user's (lexical) domain knowledge and use appropriate referring expressions. We present a reinforcement learning (RL) framework in which the system learns REG policies which can adapt to unknown users online. Furthermore, unlike supervised learning methods which require a large corpus of expert adaptive behaviour to train on, we show that effective adaptive policies can be learned from a small dialogue corpus of non-adaptive human-machine interaction, by using a RL framework and a statistical user simulation. We show that in comparison to adaptive hand-coded baseline policies, the learned policy performs significantly better, with an 18.6% average increase in adaptation accuracy. The best learned policy also takes less dialogue time (average 1.07 min less) than the best hand-coded policy. This is because the learned policies can adapt online to changing evidence about the user's domain expertise.

1 Introduction

We present a reinforcement learning (Sutton and Barto, 1998) framework to learn user-adaptive referring expression generation policies from data-driven user simulations. A user-adaptive REG policy allows the system to choose appropriate expressions to refer to domain entities in a dialogue setting. For instance, in a technical support conversation, the system could choose to use more technical terms with an expert user, or to use more descriptive and general expressions with novice users, and a mix of the two with intermediate users of various sorts (see examples in Table 1).

Table 1: Referring expression examples for 2 entities (from the corpus).
Jargon: Please plug one end of the broadband cable into the broadband filter.
Descriptive: Please plug one end of the thin white cable with grey ends into the small white box.

In natural human-human conversations, dialogue partners learn about each other and adapt their language to suit their domain expertise (Isaacs and Clark, 1987). This kind of adaptation is called Alignment through Audience Design (Clark and Murphy, 1982; Bell, 1984). We assume that users are mostly unknown to the system and therefore that a spoken dialogue system (SDS) must be capable of observing the user's dialogue behaviour, modelling his/her domain knowledge, and adapting accordingly, just like human interlocutors. Rule-based and supervised learning approaches to user adaptation in SDS have been proposed earlier (Cawsey, 1993; Akiba and Tanaka, 1994). However, such methods require expensive resources such as domain experts to hand-code the rules, or a corpus of expert-layperson interactions to train on.
In contrast, we present a corpus-driven framework using which a user-adaptive REG policy can be learned using RL from a small corpus of non-adaptive humanmachine interaction. We show that these learned policies perform better than simple hand-coded adaptive policies in terms of accuracy of adaptation and dialogue 69 time. We also compared the performance of policies learned using a hand-coded rule-based simulation and a data-driven statistical simulation and show that data-driven simulations produce better policies than rule-based ones. In section 2, we present some of the related work. Section 3 presents the dialogue data that we used to train the user simulation. Section 4 and section 5 describe the dialogue system framework and the user simulation models. In section 6, we present the training and in section 7, we present the evaluation for different REG policies. 2 Related work There are several ways in which natural language generation (NLG) systems adapt to users. Some of them adapt to a user’s goals, preferences, environment and so on. Our focus in this study is restricted to the user’s lexical domain expertise. Several NLG systems adapt to the user’s domain expertise at different levels of generation text planning (Paris, 1987), complexity of instructions (Dale, 1989), referring expressions (Reiter, 1991), and so on. Some dialogue systems, such as COMET, have also incorporated NLG modules that present appropriate levels of instruction to the user (McKeown et al., 1993). However, in all the above systems, the user’s knowledge is assumed to be accurately represented in an initial user model using which the system adapts its language. In contrast to all these systems, our adaptive REG policy knows nothing about the user when the conversation starts. Rule-based and supervised learning approaches have been proposed to learn and adapt during the conversation dynamically. Such systems learned from the user at the start and later adapted to the domain knowledge of the users. However, they either require expensive expert knowledge resources to hand-code the inference rules (Cawsey, 1993) or large corpus of expert-layperson interaction from which adaptive strategies can be learned and modelled, using methods such as Bayesian networks (Akiba and Tanaka, 1994). In contrast, we present an approach that learns in the absence of these expensive resources. It is also not clear how supervised and rule-based approaches choose between when to seek more information and when to adapt. In this study, we show that using reinforcement learning this decision is learned automatically. Reinforcement Learning (RL) has been successfully used for learning dialogue management policies since (Levin et al., 1997). The learned policies allow the dialogue manager to optimally choose appropriate dialogue acts such as instructions, confirmation requests, and so on, under uncertain noise or other environment conditions. There have been recent efforts to learn information presentation and recommendation strategies using reinforcement learning (Rieser and Lemon, 2009; Hernandez et al., 2003; Rieser and Lemon, 2010), and joint optimisation of Dialogue Management and NLG using hierarchical RL has been proposed by (Lemon, 2010). In contrast, we present a framework to learn to choose appropriate referring expressions based on a user’s domain knowledge. Earlier, we reported a proof-of-concept work using a hand-coded rule-based user simulation (Janarthanam and Lemon, 2009c). 
3 The Wizard-of-Oz Corpus We use a corpus of technical support dialogues collected from real human users using a Wizardof-Oz method (Janarthanam and Lemon, 2009b). The corpus consists of 17 dialogues from users who were instructed to physically set up a home broadband connection using objects like a wireless modem, cables, filters, etc. They listened to the instructions from the system and carried them out using the domain objects laid in front of them. The human ‘wizard’ played the role of only an interpreter who would understand what the user said and annotate it as a dialogue act. The set-up examined the effect of using three types of referring expressions (jargon, descriptive, and tutorial), on the users. Out of the 17 dialogues, 6 used a jargon strategy, 6 used a descriptive strategy, and 5 used a tutorial strategy1. The task had reference to 13 domain entities, mentioned repeatedly in the dialogue. In total, there are 203 jargon, 202 descriptive and 167 tutorial referring expressions. Interestingly, users who weren’t acquainted with the domain objects requested clarification on some of the referring expressions used. The dialogue exchanges between the user and system were logged in the form of dialogue acts and the system’s choices of referring expressions. Each user’s knowledge of domain entities was recorded both before and after the task and each user’s interac1The tutorial strategy uses both jargon and descriptive expressions together. 70 tions with the environment were recorded. We use the dialogue data, pre-task knowledge tests, and the environment interaction data to train a user simulation model. Pre and post-task test scores were used to model the learning behaviour of the users during the task (see section 5). The corpus also recorded the time taken to complete each dialogue task. We used these data to build a regression model to calculate total dialogue time for dialogue simulations. The strategies were never mixed (with some jargon, some descriptive and some tutorial expressions) within a single conversation. Therefore, please note that the strategies used for data collection were not adaptive and the human ‘wizard’ has no role in choosing which referring expression to present to the user. Due to this fact, no user score regarding adaptation was collected. We therefore measure adaptation objectively as explained in section 6.1. 4 The Dialogue System In this section, we describe the different modules of the dialogue system. The interaction between the different modules is shown in figure 1 (in learning mode). The dialogue system presents the user with instructions to setup a broadband connection at home. In the Wizard of Oz setup, the system and the user interact using speech. However, in our machine learning setup, they interact at the abstract level of dialogue actions and referring expressions. Our objective is to learn to choose the appropriate referring expressions to refer to the domain entities in the instructions. Figure 1: System User Interaction (learning) 4.1 Dialogue Manager The dialogue manager identifies the next instruction (dialogue act) to give to the user based on the dialogue management policy πdm. Since, in this study, we focus only on learning the REG policy, the dialogue management is coded in the form of a finite state machine. In this dialogue task, the system provides two kinds of instructions - observation and manipulation. 
For observation instructions, users observe the environment and report back to the system, and for the manipulation instructions (such as plugging in a cable in to a socket), they manipulate the domain entities in the environment. When the user carries out an instruction, the system state is updated and the next instruction is given. Sometimes, users do not understand the referring expressions used by the system and then ask for clarification. In such cases, the system provides clarification on the referring expression (provide clar), which is information to enable the user to associate the expression with the intended referent. The system action As,t (t denoting turn, s denoting system) is therefore to either give the user the next instruction or a clarification. When the user responds in any other way, the instruction is simply repeated. The dialogue manager is also responsible for updating and managing the system state Ss,t (see section 4.2). The system interacts with the user by passing both the system action As,t and the referring expressions RECs,t (see section 4.3). 4.2 The dialogue state The dialogue state Ss,t is a set of variables that represent the current state of the conversation. In our study, in addition to maintaining an overall dialogue state, the system maintains a user model UMs,t which records the initial domain knowledge of the user. It is a dynamic model that starts with a state where the system does not have any idea about the user. As the conversation progresses, the dialogue manager records the evidence presented to it by the user in terms of his dialogue behaviour, such as asking for clarification and interpreting jargon. Since the model is updated according to the user’s behaviour, it may be inaccurate if the user’s behaviour is itself uncertain. So, when the user’s behaviour changes (for instance, from novice to expert), this is reflected in the user model during the conversation. Hence, unlike previous studies mentioned in section 2, the user model used in this system is not always an accurate model of the user’s knowledge and reflects a level of uncertainty about the user. 71 Each jargon referring expression x is represented by a three valued variable in the dialogue state: user knows x. The three values that each variable takes are yes, no, not sure. The variables are updated using a simple user model update algorithm. Initially each variable is set to not sure. If the user responds to an instruction containing the referring expression x with a clarification request, then user knows x is set to no. Similarly, if the user responds with appropriate information to the system’s instruction, the dialogue manager sets user knows x is set to yes. The dialogue manager updates the variables concerning the referring expressions used in the current system utterance appropriately after the user’s response each turn. The user may have the capacity to learn jargon. However, only the user’s initial knowledge is recorded. This is based on the assumption that an estimate of the user’s knowledge helps to predict the user’s knowledge of the rest of the referring expressions. Another issue concerning the state space is its size. Since, there are 13 entities and we only model the jargon expressions, the state space size is 313. 4.3 REG module The REG module is a part of the NLG module whose task is to identify the list of domain entities to be referred to and to choose the appropriate referring expression for each of the domain entities for each given dialogue act. 
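The three-valued user model of Section 4.2 and the kind of per-referent choice the REG module has to make can be sketched as follows. This is a minimal illustration under invented names, not the system's actual implementation; in particular, the fallback used for not sure entries below is just a placeholder for the learned policy described in the remainder of this section.

# Three-valued user model: one entry per jargon expression.
user_model = {ref: "not_sure" for ref in
              ["broadband_filter", "broadband_cable", "ethernet_cable"]}

def update_user_model(user_model, used_jargon, user_response):
    """Update 'user_knows_x' after each turn, as in Section 4.2."""
    for ref in used_jargon:
        if user_response == "clarification_request":
            user_model[ref] = "no"
        elif user_response == "instruction_response":
            user_model[ref] = "yes"

def choose_expression(user_model, ref):
    """Illustrative REG choice: exploit the model when it is informative."""
    if user_model[ref] == "yes":
        return (ref, "jargon")
    if user_model[ref] == "no":
        return (ref, "desc")
    return (ref, "jargon")        # 'not_sure': an information-seeking move

update_user_model(user_model, ["broadband_filter"], "clarification_request")
print(choose_expression(user_model, "broadband_filter"))   # ('broadband_filter', 'desc')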
In this study, we focus only on the production of appropriate referring expressions to refer to domain entities mentioned in the dialogue act. It chooses between the two types of referring expressions - jargon and descriptive. For example, the domain entity broadband filter can be referred to using the jargon expression “broadband filter” or using the descriptive expression “small white box”2. We call this the act of choosing the REG action. The tutorial strategy was not investigated here since the corpus analysis showed tutorial utterances to be very time consuming. In addition, they do not contribute to the adaptive behaviour of the system. The REG module operates in two modes - learning and evaluation. In the learning mode, the REG module is the learning agent. The REG module learns to associate dialogue states with optimal REG actions. This is represented by a REG 2We will use italicised forms to represent the domain entities (e.g. broadband filter) and double quotes to represent the referring expressions (e.g. “broadband filter”). policy πreg : UMs,t →RECs,t, which maps the states of the dialogue (user model) to optimal REG actions. The referring expression choices RECs,t is a set of pairs identifying the referent R and the type of expression T used in the current system utterance. For instance, the pair (broadband filter, desc) represents the descriptive expression “small white box”. RECs,t = {(R1, T1), ..., (Rn, Tn)} In the evaluation mode, a trained REG policy interacts with unknown users. It consults the learned policy πreg to choose the referring expressions based on the current user model. 5 User Simulations In this section, we present user simulation models that simulate the dialogue behaviour of a real human user. These external simulation models are different from internal user models used by the dialogue system. In particular, our model is the first to be sensitive to a system’s choices of referring expressions. The simulation has a statistical distribution of in-built knowledge profiles that determines the dialogue behaviour of the user being simulated. If the user does not know a referring expression, then he is more likely to request clarification. If the user is able to interpret the referring expressions and identify the references then he is more likely to follow the system’s instruction. This behaviour is simulated by the action selection models described below. Several user simulation models have been proposed for use in reinforcement learning of dialogue policies (Georgila et al., 2005; Schatzmann et al., 2006; Schatzmann et al., 2007; Ai and Litman, 2007). However, they are suited only for learning dialogue management policies, and not natural language generation policies. Earlier, we presented a two-tier simulation trained on data precisely for REG policy learning (Janarthanam and Lemon, 2009a). However, it is not suited for training on small corpus like the one we have at our disposal. In contrast to the earlier model, we now condition the clarification requests on the referent class rather than the referent itself to handle data sparsity problem. The user simulation (US) receives the system action As,t and its referring expression choices RECs,t at each turn. The US responds with a user action Au,t (u denoting user). This can either be a clarification request (cr) or an instruction 72 response (ir). We used two kinds of action selection models: corpus-driven statistical model and hand-coded rule-based model. 
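The corpus-driven variant, detailed in Section 5.1 below, essentially samples the user action from conditional distributions estimated on the Wizard-of-Oz data. The following sketch illustrates that idea with plain maximum-likelihood counts and a crude default for unseen contexts; the real models use a variant of Witten-Bell discounting, and the feature encoding and toy events here are our own.

import random
from collections import Counter

def estimate(corpus_events):
    """MLE estimate of P(clarification | referent class, expression type, user knows it)."""
    counts, totals = Counter(), Counter()
    for ref_class, expr_type, knows, asked_cr in corpus_events:
        context = (ref_class, expr_type, knows)
        totals[context] += 1
        counts[context] += asked_cr
    return {c: counts[c] / totals[c] for c in totals}

def user_action(model, ref_class, expr_type, knows, default=0.05):
    """Sample a clarification request ('cr') or an instruction response ('ir')."""
    p_cr = model.get((ref_class, expr_type, knows), default)
    return "cr" if random.random() < p_cr else "ir"

# Toy events: (referent class, expression type, user knows the expression, clarification?)
events = [("difficult", "jargon", False, 1), ("difficult", "jargon", False, 1),
          ("difficult", "jargon", False, 0), ("easy", "jargon", True, 0)]
model = estimate(events)
print(user_action(model, "difficult", "jargon", False))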
5.1 Corpus-driven action selection model

In the corpus-driven model, the US produces a clarification request cr based on the class of the referent C(Ri), type of the referring expression Ti, and the current domain knowledge of the user for the referring expression DKu,t(Ri, Ti). Domain entities whose jargon expressions raised clarification requests in the corpus were listed and those that had more than the mean number of clarification requests were classified as difficult and others as easy entities (for example, "power adaptor" is easy - all users understood this expression, "broadband filter" is difficult). Clarification requests are produced using the following model:

P(Au,t = cr(Ri, Ti) | C(Ri), Ti, DKu,t(Ri, Ti))  where (Ri, Ti) ∈ RECs,t

One should note that the actual literal expression is not used in the transaction. Only the entity that it is referring to (Ri) and its type (Ti) are used. However, the above model simulates the process of interpreting and resolving the expression and identifying the domain entity of interest in the instruction. The user identification of the entity is signified when there is no clarification request produced (i.e. Au,t = none). When no clarification request is produced, the environment action EAu,t is generated using the following model:

P(EAu,t | As,t)  if Au,t ≠ cr(Ri, Ti)

Finally, the user action is an instruction response which is determined by the system action As,t. Instruction responses can be different in different conditions. For an observe and report instruction, the user issues a provide info action and for a manipulation instruction, the user responds with an acknowledgement action and so on.

P(Au,t = ir | EAu,t, As,t)

All the above models were trained on our corpus data using maximum likelihood estimation and smoothed using a variant of Witten-Bell discounting. According to the data, clarification requests are much more likely when jargon expressions are used to refer to the referents that belong to the difficult class and which the user doesn't know about. When the system uses expressions that the user knows, the user generally responds to the instruction given by the system. These user simulation models have been evaluated and found to produce behaviour that is very similar to the original corpus data, using the Kullback-Leibler divergence metric (Cuayahuitl, 2009).

5.2 Rule-based action selection model

We also built a rule-based simulation using the above models but where some of the parameters were set manually instead of estimated from the data. The purpose of this simulation is to investigate how learning with a data-driven statistical simulation compares to learning with a simple hand-coded rule-based simulation. In this simulation, the user always asks for a clarification when he does not know a jargon expression (regardless of the class of the referent) and never does this when he knows it. This enforces a stricter, more consistent behaviour for the different knowledge patterns, which we hypothesise should be easier to learn to adapt to, but may lead to less robust REG policies.

Table 2: Domain knowledge: an intermediate user.
livebox = 1             lb power light = 1
power adaptor = 1       lb power socket = 1
wall phone socket = 1   lb broadband light = 0
broadband filter = 0    lb ethernet light = 0
broadband cable = 0     lb adsl socket = 0
ethernet cable = 1      lb ethernet socket = 0
                        pc ethernet socket = 1

5.3 User Domain knowledge

The user domain knowledge is initially set to one of several models at the start of every conversation.
The models range from novices to experts which were identified from the corpus using k-means clustering. The initial knowledge base (DKu,initial) for an intermediate user is shown in table 2. A novice user knows only “power adaptor”, and an expert knows all the jargon expressions. We assume that users can interpret the descriptive expressions and resolve their references. Therefore, they are not explicitly represented. We only code the user’s knowledge of jargon expressions. This is represented by a boolean variable for each domain entity. 73 Corpus data shows that users can learn jargon expressions during the conversation. The user’s domain knowledge DKu is modelled to be dynamic and is updated during the conversation. Based on our data, we found that when presented with clarification on a jargon expression, users always learned the jargon. if As,t = provide clar(Ri, Ti) DKu,t+1(Ri, Ti) ←1 Users also learn when jargon expressions are repeatedly presented to them. Learning by repetition follows the pattern of a learning curve - the greater the number of repetitions #(Ri, Ti), the higher the likelihood of learning. This is modelled stochastically based on repetition using the parameter #(Ri, Ti) as follows (where (Ri, Ti) ∈RECs,t) . P(DKu,t+1(Ri, Ti) ←1|#(Ri, Ti)) The final state of the user’s domain knowledge (DKu,final) may therefore be different from the initial state (DKu,initial) due to the learning effect produced by the system’s use of jargon expressions. In most studies done previously, the user’s domain knowledge is considered to be static. However in real conversation, we found that the users nearly always learned jargon expressions from the system’s utterances and clarifications. 6 Training The REG module was trained (operated in learning mode) using the above simulations to learn REG policies that select referring expressions based on the user expertise in the domain. As shown in figure 1, the learning agent (REG module) is given a reward at the end of every dialogue. During the training session, the learning agent explores different ways to maximize the reward. In this section, we discuss how to code the learning agent’s goals as reward. We then discuss how the reward function is used to train the learning agent. 6.1 Reward function A reward function generates a numeric reward for the learning agent’s actions. It gives high rewards to the agent when the actions are favourable and low rewards when they are not. In short, the reward function is a representation of the goal of the agent. It translates the agent’s actions into a scalar value that can be maximized by choosing the right action sequences. We designed a reward function for the goal of adapting to each user’s domain knowledge. We present the Adaptation Accuracy score AA that calculates how accurately the agent chose the expressions for each referent r, with respect to the user’s knowledge. Appropriateness of an expression is based on the user’s knowledge of the expression. So, when the user knows the jargon expression for r, the appropriate expression to use is jargon, and if s/he doesn’t know the jargon, an descriptive expression is appropriate. Although the user’s domain knowledge is dynamically changing due to learning, we base appropriateness on the initial state, because our objective is to adapt to the initial state of the user DKu,initial. However, in reality, designers might want their system to account for user’s changing knowledge as well. 
We calculate accuracy per referent RAr as the ratio of number of appropriate expressions to the total number of instances of the referent in the dialogue. We then calculate the overall mean accuracy over all referents as shown below.

RAr = #(appropriate expressions(r)) / #(instances(r))

Adaptation Accuracy AA = (1 / #(r)) Σr RAr

Note that this reward is computed at the end of the dialogue (it is a 'final' reward), and is then back-propagated along the action sequence that led to that final state. Thus the reward can be computed for each system REG action, without the system having access to the user's initial domain knowledge while it is learning a policy. Since the agent starts the conversation with no knowledge about the user, it may try to use more exploratory moves to learn about the user, although they may be inappropriate. However, by measuring accuracy to the initial user state, the agent is encouraged to restrict its exploratory moves and start predicting the user's domain knowledge as soon as possible. The system should therefore ideally explore less and adapt more to increase accuracy. The above reward function returns 1 when the agent is completely accurate in adapting to the user's domain knowledge and it returns 0 if the agent's REC choices were completely inappropriate. Usually during learning, the reward value lies between these two extremes and the agent tries to maximize it to 1.
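A direct transcription of this adaptation accuracy reward into code is given below; it is only an illustration, and the data structures (a list of choices and a dictionary of the user's initial jargon knowledge) are our own.

def adaptation_accuracy(choices, initial_knowledge):
    """choices: list of (referent, expression_type) used in one dialogue.
    initial_knowledge: dict referent -> True if the user knew the jargon initially."""
    per_referent = {}
    for referent, expr_type in choices:
        appropriate = (expr_type == "jargon") == initial_knowledge[referent]
        used, ok = per_referent.get(referent, (0, 0))
        per_referent[referent] = (used + 1, ok + appropriate)
    ratios = [ok / used for used, ok in per_referent.values()]
    return sum(ratios) / len(ratios)

choices = [("broadband_filter", "jargon"), ("broadband_filter", "desc"),
           ("ethernet_cable", "jargon")]
knowledge = {"broadband_filter": False, "ethernet_cable": True}
print(adaptation_accuracy(choices, knowledge))   # (0.5 + 1.0) / 2 = 0.75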
6.2 Learning

The REG module was trained in learning mode using the above reward function and the SHARSHA reinforcement learning algorithm (with linear function approximation) (Shapiro and Langley, 2002). This is a hierarchical variant of SARSA, which is an on-policy learning algorithm that updates the current behaviour policy (see (Sutton and Barto, 1998)). The training produced approx. 5000 dialogues. Two types of simulations were used as described above: data-driven and hand-coded. Both user simulations were calibrated to produce three types of users: Novice, Int2 (intermediate) and Expert, randomly but with equal probability. Novice users knew just one jargon expression, Int2 knew seven, and Expert users knew all thirteen jargon expressions. There was an underlying pattern in these knowledge profiles. For example, Intermediate users were those who knew the commonplace domain entities but not those specific to broadband connection. For instance, they knew "ethernet cable" and "pc ethernet socket" but not "broadband filter" and "broadband cable". Initially, the REG policy chooses randomly between the referring expression types for each domain entity in the system utterance, irrespective of the user model state. Once the referring expressions are chosen, the system presents the user simulation with both the dialogue act and referring expression choices. The choice of referring expression affects the user's dialogue behaviour, which in turn makes the dialogue manager update the user model. For instance, choosing a jargon expression could evoke a clarification request from the user, which in turn prompts the dialogue manager to update the user model with the new information that the user is ignorant of the particular expression. It should be noted that using a jargon expression is an information seeking move which enables the REG module to estimate the user's knowledge level. The same process is repeated for every dialogue instruction. At the end of the dialogue, the system is rewarded based on its choices of referring expressions. If the system chooses jargon expressions for novice users or descriptive expressions for expert users, penalties are incurred, and if the system chooses REs appropriately, the reward is high. On the one hand, those actions that fetch more reward are reinforced, and on the other hand, the agent tries out new state-action combinations to explore the possibility of greater rewards. Over time, it stops exploring new state-action combinations and exploits those actions that contribute to higher reward. The REG module learns to choose the appropriate referring expressions based on the user model in order to maximize the overall adaptation accuracy. Figure 2 shows how the agent learns using the data-driven (Learned DS) and hand-coded simulations (Learned HS) during training. It can be seen in Figure 2 that towards the end the curve plateaus, signifying that learning has converged.

Figure 2: Learning curves - Training.

7 Evaluation

In this section, we present the evaluation metrics used, the baseline policies that were hand-coded for comparison, and the results of evaluation.

7.1 Metrics

In addition to the adaptation accuracy mentioned in section 6.1, we also measure other parameters from the conversation in order to show how learned adaptive policies compare with other policies on other dimensions. We calculate the time taken (Time) for the user to complete the dialogue task. This is calculated using a regression model from the corpus based on number of words, turns, and mean user response time. We also measure the (normalised) learning gain (LG) produced by using unknown jargon expressions. This is calculated using the pre and post scores from the user domain knowledge (DKu) as follows.

Learning Gain LG = (Post − Pre) / (1 − Pre)
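As a small illustration of this metric (again only a sketch, under the assumption that the pre and post scores are fractions of the thirteen jargon expressions known by the user):

def learning_gain(pre_known, post_known, n_expressions=13):
    """Normalised learning gain from pre/post jargon-knowledge counts."""
    pre = pre_known / n_expressions
    post = post_known / n_expressions
    if pre == 1.0:                 # expert user: nothing left to learn
        return 0.0
    return (post - pre) / (1.0 - pre)

print(learning_gain(pre_known=7, post_known=10))   # intermediate user: 0.5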
We found that the Learned DS policy (i.e. learned with the data-driven user simulation) is the most accurate (Mean = 79.70, SD = 10.46) in terms of adaptation to each user's initial state of domain knowledge. Also, it is the only policy that has more or less the same accuracy scores over all different user types (see Figure 3). It should also be noted that it generalised well over user types (Int1 and Int3) which were unseen in training.

Figure 3: Evaluation - Adaptation Accuracy

Policies      AA      Time    LG
Descriptive   46.15   7.44    0
Jargon        74.54   9.15*   0.97*
Switching     62.47   7.48    0.30
Learned HS    69.67   7.52    0.33
Learned DS    79.70*  8.08*   0.63*
* Significantly different from all others (p < 0.05).
Table 3: Evaluation on 5 user types

The Learned DS policy outperforms all other policies: Learned HS (Mean = 69.67, SD = 14.18), Switching (Mean = 62.47, SD = 14.18), Jargon (Mean = 74.54, SD = 17.9) and Descriptive (Mean = 46.15, SD = 33.29). The differences between the accuracy (AA) of the Learned DS policy and all other policies were statistically significant with p < 0.05 (using a two-tailed paired t-test). Although the Learned HS policy is similar to the Learned DS policy, as shown in the learning curves in Figure 2, it does not perform as well when confronted with user types that it did not encounter during training. The Switching policy, on the other hand, quickly switches its strategy based on the user's clarification requests but does not adapt appropriately to evidence presented later during the conversation; because of the users' uncertain behaviour, it sometimes switches erroneously. In contrast, learned policies continuously adapt to new evidence. The Jargon policy performs better than the Learned HS and Switching policies. This is because the system can learn more about the user by using more jargon expressions and then use that knowledge for adaptation on known referents. However, it is not possible for this policy to predict the user's knowledge of unseen referents. The Learned DS policy performs better than the Jargon policy because it is able to accurately predict the user's knowledge of referents unseen in the dialogue so far. The learned policies are a little more time-consuming than the Switching and Descriptive policies, but compared to the Jargon policy, Learned DS takes 1.07 minutes less time. This is because learned policies use only a few jargon expressions (giving rise to clarification requests) to learn about the user. On the other hand, the Jargon policy produces more user learning gain because of its heavier use of jargon expressions. Learned policies trade off time and learning gain in order to predict and adapt well to the users' knowledge patterns; this is because the training was optimized for accuracy of adaptation and not for learning gain or time taken. The results show that using our RL framework, REG policies can be learned using data-driven simulations, and that such a policy can predict and adapt to a user's knowledge pattern more accurately than policies trained using hand-coded rule-based simulations and hand-coded baseline policies. 7.4 Discussion The learned policies explore the user's expertise and predict their knowledge patterns, in order to better choose expressions for referents unseen in the dialogue so far. The system learns to identify the patterns of knowledge in the users with a little exploration (information seeking moves). So, when it is provided with a piece of evidence (e.g.
the user knows “broadband filter”), it is able to accurately estimate unknown facts (e.g. the user might know “broadband cable”). Sometimes, its choices are wrong due to incorrect estimation of the user’s expertise (due to stochastic behaviour of the users). In such cases, the incorrect adaptation move can be considered to be an information seeking move. This helps further adaptation using the new evidence. By continuously using this “seek-predict-adapt” approach, the system adapts dynamically to different users. Therefore, with a little information seeking and better prediction, the learned policies are able to better adapt to users with different domain expertise. In addition to adaptation, learned policies learn to identify when to seek information from the user to populate the user model (which is initially set to not sure). It should be noted that the system cannot adapt unless it has some information about the user and therefore needs to decisively seek information by using jargon expressions. If it seeks information all the time, it is not adapting to the user. The learned policies therefore learn to trade-off between information seeking moves and adaptive moves in order to maximize the overall adaptation accuracy score. 8 Conclusion In this study, we have shown that user-adaptive REG policies can be learned from a small corpus of non-adaptive dialogues between a dialogue system and users with different domain knowledge levels. We have shown that such adaptive REG policies learned using a RL framework adapt to unknown users better than simple hand-coded policies built without much input from domain experts or from a corpus of expert-layperson adaptive dialogues. The learned, adaptive REG policies learn to trade off between adaptive moves and information seeking moves automatically to maximize the overall adaptation accuracy. Learned policies start the conversation with information seeking moves, learn a little about the user, and start adapting dynamically as the conversation progresses. We have also shown that a data-driven statistical user simulation produces better policies than a simple hand-coded rule-based simulation, and that the learned policies generalise well to unseen users. In future work, we will evaluate the learned policies with real users to examine how well they adapt, and examine how real users evaluate them (subjectively) in comparison to baselines. Whether the learned policies perform better or as well as a hand-coded policy painstakingly crafted by a domain expert (or learned using supervised methods from an expert-layperson corpus) is an interesting question that needs further exploration. Also, it would also be interesting to make the learned policy account for the user’s learning behaviour and adapt accordingly. 77 Acknowledgements The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 216594 (CLASSiC project www.classic-project.org) and from the EPSRC, project no. EP/G069840/1. References H. Ai and D. Litman. 2007. Knowledge consistent user simulations for dialog systems. In Proceedings of Interspeech 2007, Antwerp, Belgium. T. Akiba and H. Tanaka. 1994. A Bayesian approach for User Modelling in Dialogue Systems. In Proceedings of the 15th conference on Computational Linguistics - Volume 2, Kyoto. A. Bell. 1984. Language style as audience design. Language in Society, 13(2):145–204. A. Cawsey. 1993. User Modelling in Interactive Explanations. 
User Modeling and User-Adapted Interaction, 3(3):221–247. H. H. Clark and G. L. Murphy. 1982. Audience design in meaning and reference. In J. F. LeNy and W. Kintsch, editors, Language and comprehension. Amsterdam: North-Holland. H. Cuayahuitl. 2009. Hierarchical Reinforcement Learning for Spoken Dialogue Systems. Ph.D. thesis, University of Edinburgh, UK. R. Dale. 1989. Cooking up referring expressions. In Proc. ACL-1989. K. Georgila, J. Henderson, and O. Lemon. 2005. Learning User Simulations for Information State Update Dialogue Systems. In Proc of Eurospeech/Interspeech. F. Hernandez, E. Gaudioso, and J. G. Boticario. 2003. A Multiagent Approach to Obtain Open and Flexible User Models in Adaptive Learning Communities. In User Modeling 2003, volume 2702/2003 of LNCS. Springer, Berlin / Heidelberg. E. A. Issacs and H. H. Clark. 1987. References in conversations between experts and novices. Journal of Experimental Psychology: General, 116:26–37. S. Janarthanam and O. Lemon. 2009a. A Two-tier User Simulation Model for Reinforcement Learning of Adaptive Referring Expression Generation Policies. In Proc. SigDial’09. S. Janarthanam and O. Lemon. 2009b. A Wizard-ofOz environment to study Referring Expression Generation in a Situated Spoken Dialogue Task. In Proc. ENLG’09. S. Janarthanam and O. Lemon. 2009c. Learning Lexical Alignment Policies for Generating Referring Expressions for Spoken Dialogue Systems. In Proc. ENLG’09. O. Lemon. 2010. Learning what to say and how to say it: joint optimization of spoken dialogue management and Natural Language Generation. Computer Speech and Language. (to appear). E. Levin, R. Pieraccini, and W. Eckert. 1997. Learning Dialogue Strategies within the Markov Decision Process Framework. In Proc. of ASRU97. K. McKeown, J. Robin, and M. Tanenblatt. 1993. Tailoring Lexical Choice to the User’s Vocabulary in Multimedia Explanation Generation. In Proc. ACL 1993. C. L. Paris. 1987. The Use of Explicit User Models in Text Generations: Tailoring to a User’s Level of Expertise. Ph.D. thesis, Columbia University. E. Reiter. 1991. Generating Descriptions that Exploit a User’s Domain Knowledge. In R. Dale, C. Mellish, and M. Zock, editors, Current Research in Natural Language Generation, pages 257–285. Academic Press. V. Rieser and O. Lemon. 2009. Natural Language Generation as Planning Under Uncertainty for Spoken Dialogue Systems. In Proc. EACL’09. V. Rieser and O. Lemon. 2010. Optimising information presentation for spoken dialogue systems. In Proc. ACL. (to appear). J. Schatzmann, K. Weilhammer, M. N. Stuttle, and S. J. Young. 2006. A Survey of Statistical User Simulation Techniques for Reinforcement Learning of Dialogue Management Strategies. Knowledge Engineering Review, pages 97–126. J. Schatzmann, B. Thomson, K. Weilhammer, H. Ye, and S. J. Young. 2007. Agenda-based User Simulation for Bootstrapping a POMDP Dialogue System. In Proc of HLT/NAACL 2007. D. Shapiro and P. Langley. 2002. Separating skills from preference: Using learning to program by reward. In Proc. ICML-02. R. Sutton and A. Barto. 1998. Reinforcement Learning. MIT Press. 78
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 780–788, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Letter-Phoneme Alignment: An Exploration Sittichai Jiampojamarn and Grzegorz Kondrak Department of Computing Science University of Alberta Edmonton, AB, T6G 2E8, Canada {sj,kondrak}@cs.ualberta.ca Abstract Letter-phoneme alignment is usually generated by a straightforward application of the EM algorithm. We explore several alternative alignment methods that employ phonetics, integer programming, and sets of constraints, and propose a novel approach of refining the EM alignment by aggregation of best alignments. We perform both intrinsic and extrinsic evaluation of the assortment of methods. We show that our proposed EM-Aggregation algorithm leads to the improvement of the state of the art in letter-to-phoneme conversion on several different data sets. 1 Introduction Letter-to-phoneme (L2P) conversion (also called grapheme-to-phoneme conversion) is the task of predicting the pronunciation of a word given its orthographic form by converting a sequence of letters into a sequence of phonemes. The L2P task plays a crucial role in speech synthesis systems (Schroeter et al., 2002), and is an important part of other applications, including spelling correction (Toutanova and Moore, 2001) and speechto-speech machine translation (Engelbrecht and Schultz, 2005). Many data-driven techniques have been proposed for letter-to-phoneme conversion systems, including neural networks (Sejnowski and Rosenberg, 1987), decision trees (Black et al., 1998), pronunciation by analogy (Marchand and Damper, 2000), Hidden Markov Models (Taylor, 2005), and constraint satisfaction (Bosch and Canisius, 2006). Letter-phoneme alignment is an important step in the L2P task. The training data usually consists of pairs of letter and phoneme sequences, which are not aligned. Since there is no explicit information indicating the relationships between individual letter and phonemes, these must be inferred by a letter-phoneme alignment algorithm before a prediction model can be trained. The quality of the alignment affects the accuracy of L2P conversion. Letter-phoneme alignment is closely related to transliteration alignment (Pervouchine et al., 2009), which involves graphemes representing different writing scripts. Letter-phoneme alignment may also be considered as a task in itself; for example, in the alignment of speech transcription with text in spoken corpora. Most previous L2P approaches induce the alignment between letters and phonemes with the expectation maximization (EM) algorithm. In this paper, we propose a number of alternative alignment methods, and compare them to the EMbased algorithms using both intrinsic and extrinsic evaluations. The intrinsic evaluation is conducted by comparing the generated alignments to a manually-constructed gold standard. The extrinsic evaluation uses two different generation techniques to perform letter-to-phoneme conversion on several different data sets. We discuss the advantages and disadvantages of various methods, and show that better alignments tend to improve the accuracy of the L2P systems regardless of the actual technique. In particular, one of our proposed methods advances the state of the art in L2P conversion. We also examine the relationship between alignment entropy and alignment quality. This paper is organized as follows. 
In Section 2, we enumerate the assumptions that the alignment methods commonly adopt. In Section 3, we review previous work that employs the EM approach. In Sections 4, 5 and 6, we describe alternative approaches based on phonetics, manuallyconstructed constraints, and Integer Programming, respectively. In Section 7, we propose an algorithm to refine the alignments produced by EM. Sections 8 and 9 are devoted to the intrinsic and extrinsic evaluation of various approaches. Section 10 concludes the paper. 780 2 Background We define the letter-phoneme alignment task as the problem of inducing links between units that are related by pronunciation. Each link is an instance of a specific mapping between letters and phonemes. The leftmost example alignment of the word accuse [@kjuz] below includes 1-1, 1-0, 12, and 2-1 links. The letter e is considered to be linked to special null phoneme. Figure 1: Two alignments of accuse. The following constraints on links are assumed by some or all alignment models: • the monotonicity constraint prevents links from crossing each other; • the representation constraint requires each phoneme to be linked to at least one letter, thus precluding nulls on the letter side; • the one-to-one constraint stipulates that each letter and phoneme may participate in at most one link. These constraints increasingly reduce the search space and facilitate the training process for the L2P generation models. We refer to an alignment model that assumes all three constraints as a pure one-to-one (1-1) model. By allowing only 1-1 and 1-0 links, the alignment task is thus greatly simplified. In the simplest case, when the number of letters is equal to the number of phonemes, there is only one possible alignment that satisfies all three constraints. When there are more letters than phonemes, the search is reduced to identifying letters that must be linked to null phonemes (the process referred to as “epsilon scattering” by Black et al. (1998)). In some words, however, one letter clearly represents more than one phoneme; for example, u in Figure 1. Moreover, a pure 1-1 approach cannot handle cases where the number of phonemes exceeds the number of letters. A typical solution to overcome this problems is to introduce so-called double phonemes by merging adjacent phonemes that could be represented as a single letter. For example, a double phoneme U would replace a sequence of the phonemes j and u in Figure 1. This solution requires a manual extension of the set of phonemes present in the data. By convention, we regard the models that include a restricted set of 1-2 mappings as 1-1 models. Advanced L2P approaches, including the joint n-gram models (Bisani and Ney, 2008) and the joint discriminative approach (Jiampojamarn et al., 2007) eliminate the one-to-one constraint entirely, allowing for linking of multiple letters to multiple phonemes. We refer to such models as many-to-many (M-M) models. 3 EM Alignment Early EM-based alignment methods (Daelemans and Bosch, 1997; Black et al., 1998; Damper et al., 2005) were generally pure 1-1 models. The 1-1 alignment problem can be formulated as a dynamic programming problem to find the maximum score of alignment, given a probability table of aligning letter and phoneme as a mapping function. 
The dynamic programming recursion to find the most likely alignment is the following: Ci,j = max    Ci−1,j−1 + δ(xi, yj) Ci−1,j + δ(xi, ǫ) Ci,j−1 + δ(ǫ, yj) (1) where δ(xi, ǫ) denotes a probability that a letter xi aligns with a null phoneme and δ(ǫ, yj) denotes a probability that a null letter aligns with a phoneme yj. In practice, the latter probability is often set to zero in order to enforce the representation constraint, which facilitates the subsequent phoneme generation process. The probability table δ(xi, yj) can be initialized by a uniform distribution and is iteratively re-computed (M-step) from the most likely alignments found at each iteration over the data set (E-step). The final alignments are constructed after the probability table converges. M2M-aligner (Jiampojamarn et al., 2007) is a many-to-many (M-M) alignment algorithm based on EM that allows for mapping of multiple letters to multiple phonemes. Algorithm 1 describes the E-step of the many-to-many alignment algorithm. γ represents partial counts collected over all possible mappings between substrings of letters and phonemes. The maximum lengths of letter and phoneme substrings are controlled by the 781 Algorithm 1: Many-to-many alignment Input: x, y, maxX, maxY, γ Output: γ α := FORWARD-M2M (x, y, maxX, maxY ) 1 β := BACKWARD-M2M (x, y, maxX, maxY ) 2 T = |x| + 1 , V = |y| + 1 3 if (αT,V = 0) then 4 return 5 for t = 1..T , v = 1..V do 6 for i = 1..maxX st t −i ≥0 do 7 γ(xt t−i+1, ǫ) += αt−i,vδ(xt t−i+1,ǫ)βt,v αT,V 8 for i = 1..maxX st t −i ≥0 do 9 for j = 1..maxY st v −j ≥0 do 10 γ(xt t−i+1, yv v−j+1) += αt−i,v−jδ(xt t−i+1,yv v−j+1)βt,v αT,V 11 maxX and maxY parameters. The forward probability α is estimated by summing the probabilities from left to right, while the backward probability β is estimated in the opposite direction. The FORWARD-M2M procedure is similar to line 3 to 10 of Algorithm 1, except that it uses Equation 2 in line 8 and 3 in line 11. The BACKWARD-M2M procedure is analogous to FORWARD-M2M. αt,v += δ(xt t−i+1, ǫ)αt−i,v (2) αt,v += δ(xt t−i+1, yv v−j+1)αt−i,v−j (3) In M-step, the partial counts are normalized by using a conditional distribution to create the mapping probability table δ. The final many-tomany alignments are created by finding the most likely paths using the Viterbi algorithm based on the learned mapping probability table. The source code of M2M-aligner is publicly available.1 Although the many-to-many approach tends to create relatively large models, it generates more intuitive alignments and leads to improvement in the L2P accuracy (Jiampojamarn et al., 2007). However, since many links involve multiple letters, it also introduces additional complexity in the phoneme prediction phase. One possible solution is to apply a letter segmentation algorithm at test time to cluster letters according to the alignments in the training data. This is problematic because of error propagation inherent in such a process. A better solution is to combine segmentation and decoding using a phrasal decoder (e.g. (Zens and Ney, 2004)). 1http://code.google.com/p/m2m-aligner/ 4 Phonetic alignment The EM-based approaches to L2P alignment treat both letters and phonemes as abstract symbols. A completely different approach to L2P alignment is based on the phonetic similarity between phonemes. The key idea of the approach is to represent each letter by a phoneme that is likely to be represented by the letter. 
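Returning briefly to the EM-based models of Section 3, the one-to-one recursion in equation (1) can be sketched as follows. The sketch assumes delta is a callable returning log-probability-like scores (with delta(epsilon, y_j) effectively minus infinity when nulls on the letter side are disallowed) and, for brevity, returns only the score; recovering the alignment itself requires the usual backtrace.

NEG_INF = float("-inf")

def best_one_to_one_alignment_score(letters, phonemes, delta, eps="_"):
    """Equation (1): best-scoring 1-1 alignment of a letter and a phoneme string."""
    T, V = len(letters), len(phonemes)
    C = [[NEG_INF] * (V + 1) for _ in range(T + 1)]
    C[0][0] = 0.0
    for i in range(T + 1):
        for j in range(V + 1):
            if i > 0 and j > 0:   # letter aligned to phoneme
                C[i][j] = max(C[i][j], C[i - 1][j - 1] + delta(letters[i - 1], phonemes[j - 1]))
            if i > 0:             # letter aligned to a null phoneme
                C[i][j] = max(C[i][j], C[i - 1][j] + delta(letters[i - 1], eps))
            if j > 0:             # null letter; usually forbidden (score -inf)
                C[i][j] = max(C[i][j], C[i][j - 1] + delta(eps, phonemes[j - 1]))
    return C[T][V]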
The actual phonemes on the phoneme side and the phonemes representing letters on the letter side can then be aligned on the basis of phonetic similarity between phonemes. The main advantage of the phonetic alignment is that it requires no training data, and so can be readily be applied to languages for which no pronunciation lexicons are available. The task of identifying the phoneme that is most likely to be represented by a given letter may seem complex and highly language-dependent. For example, the letter a can represent no less than 12 different English vowels. In practice, however, absolute precision is not necessary. Intuitively, the letters that had been chosen (often centuries ago) to represent phonemes in any orthographic system tend to be close to the prototype phoneme in the original script. For example, the letter ‘o’ represented a mid-high rounded vowel in Classical Latin and is still generally used to represent similar vowels. The following simple heuristic works well for a number of languages: treat every letter as if it were a symbol in the International Phonetic Alphabet (IPA). The set of symbols employed by the IPA includes the 26 letters of the Latin alphabet, which tend to correspond to the phonemes that they represent in the Latin script. For example, the IPA symbol [ m] denotes a voiced bilabial nasal consonant, which is the phoneme represented by the letter m in most languages that utilize Latin script. ALINE (Kondrak, 2000) performs phonetic alignment of two strings of phonemes. It combines a dynamic programming alignment algorithm with an appropriate scoring scheme for computing phonetic similarity on the basis of multivalued features. The example below shows the alignment of the word sheath to its phonetic transcription [ S i T]. ALINE correctly links the most similar pairs of phonemes (s:S, e:i, t:T).2 2ALINE can also be applied to non-Latin scripts by replacing every grapheme with the IPA symbol that is phonetically closest to it. 782 s h e a t h | | | | | | S i T Since ALINE is designed to align phonemes with phonemes, it does not incorporate the representation constraint. In order to avoid the problem of unaligned phonemes, we apply a postprocessing algorithm, which also handles 1-2 links. The algorithm first attempts to remove 0-1 links by merging them with the adjacent 1-0 links. If this is not possible, the algorithm scans a list of valid 1-2 mappings, attempting to replace a pair of 0-1 and 1-1 links with a single 1-2 link. If this also fails, the entire entry is removed from the training set. Such entries often represent unusual foreignorigin words or outright annotation errors. The number of unaligned entries rarely exceeds 1% of the data. The post-processing algorithm produces an alignment that contains 1-0, 1-1, and 1-2 links. The list of valid 1-2 mappings must be prepared manually. The length of such lists ranges from 1 for Spanish and German (x:[ks]) to 17 for English. This approach is more robust than the doublephoneme technique because the two phonemes are clustered only if they can be linked to the corresponding letter. 5 Constraint-based alignment One of the advantages of the phonetic alignment is its ability to rule out phonetically implausible letter-phoneme links, such as o: p. We are interested in establishing whether a set of allowable letter-phoneme mappings could be derived directly from the data without relying on phonetic features. Black et al. 
(1998) report that constructing lists of possible phonemes for each letter leads to L2P improvement. They produce the lists in a “semiautomatic”, interactive manner. The lists constrain the alignments performed by the EM algorithm and lead to better-quality alignments. We implement a similar interactive program that incrementally expands the lists of possible phonemes for each letter by refining alignments constrained by those lists. However, instead of employing the EM algorithm, we induce alignments using the standard edit distance algorithm with substitution and deletion assigned the same cost. In cases when there are multiple alternative alignments that have the same edit distance, we randomly choose one of them. Furthermore, we extend this idea also to many-to-many alignments. In addition to lists of phonemes for each letter (11 mappings), we also construct lists of many-tomany mappings, such as ee:i, sch:S, and ew:ju. In total, the English set contains 377 mappings, of which more than half are of the 2-1 type. 6 IP Alignment The process of manually inducing allowable letterphoneme mappings is time-consuming and involves a great deal of language-specific knowledge. The Integer Programming (IP) framework offers a way to induce similar mappings without a human expert in the loop. The IP formulation aims at identifying the smallest set of letter-phoneme mappings that is sufficient to align all instances in the data set. Our IP formulation employs the three constraints enumerated in Section 2, except that the one-to-one constraint is relaxed in order to identify a small set of 1-2 mappings. We specify two types of binary variables that correspond to local alignment links and global letter-phoneme mappings, respectively. We distinguish three types of local variables, X, Y , and Z, which correspond to 1-0, 1-1, and 1-2 links, respectively. In order to minimize the number of global mappings, we set the following objective that includes variables corresponding to 1-1 and 1-2 mappings: minimize : X l,p G(l, p) + X l,p1,p2 G(l, p1p2) (4) We adopt a simplifying assumption that any letter can be linked to a null phoneme, so no global variables corresponding to 1-0 mappings are necessary. In the lexicon entry k, let lik be the letter at position i, and pjk the phoneme at position j. In order to prevent the alignments from utilizing letterphoneme mappings which are not on the global list, we impose the following constraints: ∀i,j,kY (i, j, k) ≤ G(lik, pjk) (5) ∀i,j,kZ(i, j, k) ≤ G(lik, pjkp(j+1)k) (6) For example, the local variable Y (i, j, k) is set if lik is linked to pjk. A corresponding global variable G(lik, pjk) is set if the list of allowed letterphoneme mappings includes the link (lik, pjk). Activating the local variable implies activating the corresponding global variable, but not vice versa. 783 Figure 2: A network of possible alignment links. We create a network of possible alignment links for each lexicon entry k, and assign a binary variable to each link in the network. Figure 2 shows an alignment network for the lexicon entry k: wriggle [r I g @ L]. There are three 1-0 links (level), three 1-1 links (diagonal), and one 1-2 link (steep). The local variables that receive the value of 1 are the following: X(1,0,k), Y(2,1,k), Y(3,2,k), Y(4,3,k), X(5,3,k), Z(6,5,k), and X(7,5,k). The corresponding global variables are: G(r,r), G(i,I), G(g,g), and G(l,@L). 
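As an illustration, the core of this formulation, the objective (4) together with the coupling constraints (5) and (6), could be expressed with an off-the-shelf modelling library such as PuLP; the paper does not say which IP solver was actually used, and the lexicon and candidate-link structures below are hypothetical. The left-to-right path constraints over each alignment network, described next, are added to the same problem in the same way before solving.

import pulp

def build_alignment_ip(lexicon, candidate_links):
    """lexicon: list of (letters, phonemes) pairs.
    candidate_links[k]: iterable of (i, j, kind) for entry k, kind in {'1-0', '1-1', '1-2'};
    for a '1-2' link the caller guarantees j + 1 is a valid phoneme index."""
    prob = pulp.LpProblem("letter_phoneme_alignment", pulp.LpMinimize)
    G = {}       # one binary variable per global letter-phoneme mapping
    local = {}   # one binary variable per candidate link in each entry
    for k, (x, y) in enumerate(lexicon):
        for (i, j, kind) in candidate_links[k]:
            name = "link_%d_%d_%d_%s" % (k, i, j, kind.replace("-", "_"))
            v = pulp.LpVariable(name, cat="Binary")
            local[(k, i, j, kind)] = v
            if kind == "1-1":
                key = (x[i], y[j])
            elif kind == "1-2":
                key = (x[i], y[j] + y[j + 1])
            else:
                continue                        # 1-0 links need no global variable
            if key not in G:
                G[key] = pulp.LpVariable("map_%d" % len(G), cat="Binary")
            prob += v <= G[key]                 # constraints (5) and (6)
    prob += pulp.lpSum(G.values())              # objective (4): minimise the mapping set
    return prob, local, G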
We create constraints to ensure that the link variables receiving a value of 1 form a left-to-right path through the alignment network, and that all other link variables receive a value of 0. We accomplish this by requiring the sum of the links entering each node to equal the sum of the links leaving each node. ∀i,j,k X(i, j, k) + Y (i, j, k) + Z(i, j, k) = X(i + 1, j, k) + Y (i + 1, j + 1, k) +Z(i + 1, j + 2, k) We found that inducing the IP model with the full set of variables gives too much freedom to the IP program and leads to inferior results. Instead, we first run the full set of variables on a subset of the training data which includes only the lexicon entries in which the number of phonemes exceeds the number of letters. This generates a small set of plausible 1-2 mappings. In the second pass, we run the model on the full data set, but we allow only the 1-2 links that belong to the initial set of 1-2 mappings induced in the first pass. 6.1 Combining IP with EM The set of allowable letter-phoneme mappings can also be used as an input to the EM alignment algorithm. We call this approach IP-EM. After inducing the minimal set of letter-phoneme mappings, we constrain EM to use only those mappings with the exclusion of all others. We initialize the probability of the minimal set with a uniform distribution, and set it to zero for other mappings. We train the EM model in a similar fashion to the many-tomany alignment algorithm presented in Section 3, except that we limit the letter size to be one letter, and that any letter-phoneme mapping that is not in the minimal set is assigned zero count during the E-step. The final alignments are generated after the parameters converge. 7 Alignment by aggregation During our development experiments, we observed that the technique that combines IP with EM described in the previous section generally leads to alignment quality improvement in comparison with the IP alignment. Nevertheless, because EM is constrained not to introduce any new letter-phoneme mappings, many incorrect alignments are still proposed. We hypothesized that instead of pre-constraining EM, a post-processing of EM’s output may lead to better results. M2M-aligner has the ability to create precise links involving more than one letter, such as ph:f. However, it also tends to create non-intuitive links such as se:z for the word phrase [f r e z], where e is clearly a case of a “silent” letter. We propose an alternative EM-based alignment method that instead utilizes a list of alternative one-to-many alignments created with M2M-aligner and aggregates 1-M links into M-M links in cases when there is a disagreement between alignments within the list. For example, if the list contains the two alignments shown in Figure 3, the algorithm creates a single many-to-many alignment by merging the first pair of 1-1 and 1-0 links into a single ph:f link. However, the two rightmost links are not merged because there is no disagreement between the two initial alignments. Therefore, the resulting alignment reinforces the ph:f mapping, but avoids the questionable se:z link. p h r a s e p h r a s e | | | | | | | | | | | | f - r e z - f r e z Figure 3: Two alignments of phrase. In order to generate the list of best alignments, we use Algorithm 2, which is an adaptation of the standard Viterbi algorithm. 
Each cell Qt,v contains a list of n-best scores that correspond to al784 Algorithm 2: Extracting n-best alignments Input: x, y, δ Output: QT,V T = |x| + 1 , V = |y| + 1 1 for t = 1..T do 2 Qt,v = ∅ 3 for v = 1..V do 4 for q ∈Qt−1,v do 5 append q · δ(xt, ǫ) to Qt,v 6 for j = 1..maxY st v −j ≥0 do 7 for q ∈Qt−1,v−j do 8 append q · δ(xt, yv v−j+1) to Qt,v 9 sort Qt,v 10 Qt,v = Qt,v[1 : n] 11 ternative alignments during the forward pass. In line 9, we consider all possible 1-M links between letter xt and phoneme substring yv v−j+1. At the end of the main loop, we keep at most n best alignments in each Qt,v list. Algorithm 2 yields n-best alignments in the QT,V list. However, in order to further restrict the set of high-quality alignments, we also discard the alignments with scores below threshold R with respect to the best alignment score. Based on the experiments with the development set, we set R = 0.8 and n = 10. 8 Intrinsic evaluation For the intrinsic evaluation, we compared the generated alignments to gold standard alignments extracted from the the core vocabulary of the Combilex data set (Richmond et al., 2009). Combilex is a high quality pronunciation lexicon with explicit expert manual alignments. We used a subset of the lexicon composed of the core vocabulary containing 18,145 word-phoneme pairs. The alignments contain 550 mappings, which include complex 4-1 and 2-3 types. Each alignment approach creates alignments from unaligned word-phoneme pairs in an unsupervised fashion. We distinguish between the 1-1 and M-M approaches. We report the alignment quality in terms of precision, recall and Fscore. Since the gold standard includes many links that involve multiple letters, the theoretical upper bound for recall achieved by a one-to-one approach is 90.02%. However, it is possible to obtain the perfect precision because we count as correct all 1-1 links that are consistent with the M-M links in the gold standard. The F-score corresponding to perfect precision and the upper-bound recall is 94.75%. Alignment entropy is a measure of alignment quality proposed by Pervouchine et al. (2009) in the context of transliteration. The entropy indicates the uncertainty of mapping between letter l and phoneme p resulting from the alignment: We compute the alignment entropy for each of the methods using the following formula: H = − X l,p P(l, p) log P(l|p) (7) Table 1 includes the results of the intrinsic evaluation. (the two rightmost columns are discussed in Section 9). The baseline BaseEM is an implementation of the one-to-one alignment method of (Black et al., 1998) without the allowable list. ALINE is the phonetic method described in Section 4. SeedMap is the hand-seeded method described in Section 5. M-M-EM is the M2Maligner approach of Jiampojamarn et al. (2007). 1-M-EM is equivalent to M-M-EM but with the restriction that each link contains exactly one letter. IP-align is the alignment generated by the IP formulation from Section 6. IP-EM is the method that combines IP with EM described in Section 6.1. EM-Aggr is our final many-to-many alignment method described in Section 7. Oracle corresponds to the gold-standard alignments from Combilex. Overall, the M-M models obtain lower precision but higher recall and F-score than 1-1 models, which is to be expected as the gold standard is defined in terms of M-M links. ALINE produces the most accurate alignments among the 1-1 methods, with the precision and recall values that are very close to the theoretical upper bounds. 
Its precision is particularly impressive: on average, only one link in a thousand is not consistent with the gold standard. In terms of word accuracy, 98.97% of words have no incorrect links. Out of 18,145 words, only 112 words contain incorrect links, and further 75 words could not be aligned. The ranking of the 1-1 methods is quite clear: ALINE followed by IP-EM, 1-M-EM, IP-align, and BaseEM. Among the M-M methods, EM-Aggr has slightly better precision than M-M-EM, but its recall is much worse. This is probably caused by the aggregation strategy causing EM-Aggr to "lose" a significant number of correct links. In general, the entropy measure does not mirror the quality of the alignment.

Aligner    Precision  Recall  F1 score  Entropy  L2P 1-1  L2P M-M
BaseEM     96.54      82.84   89.17     0.794    50.00    65.38
ALINE      99.90      89.54   94.44     0.672    54.85    68.74
1-M-EM     99.04      89.15   93.84     0.636    53.91    69.13
IP-align   98.30      88.49   93.14     0.706    52.66    68.25
IP-EM      99.31      89.40   94.09     0.651    53.86    68.91
M-M-EM     96.54      97.13   96.83     0.655    —        68.52
EM-Aggr    96.67      93.39   95.00     0.635    —        69.35
SeedMap    97.88      97.44   97.66     0.634    —        68.69
Oracle     100.0      100.0   100.0     0.640    —        69.35
Table 1: Alignment quality, entropy, and L2P conversion accuracy on the Combilex data set.

Aligner    Celex-En  CMUDict  NETtalk  OALD   Brulex
BaseEM     75.35     60.03    54.80    67.23  81.33
ALINE      81.50     66.46    54.90    72.12  89.37
1-M-EM     80.12     66.66    55.00    71.11  88.97
IP-align   78.88     62.34    53.10    70.46  83.72
IP-EM      80.95     67.19    54.70    71.24  87.81
Table 2: L2P word accuracy using the TiMBL-based generation system.

9 Extrinsic evaluation
In order to investigate the relationship between the alignment quality and L2P performance, we feed the alignments to two different L2P systems. The first one is a classification-based learning system employing TiMBL (Daelemans et al., 2009), which can utilize either 1-1 or 1-M alignments. The second system is the state-of-the-art online discriminative training for letter-to-phoneme conversion (Jiampojamarn et al., 2008), which accepts both 1-1 and M-M types of alignment. Jiampojamarn et al. (2008) show that the online discriminative training system outperforms a number of competitive approaches, including joint n-grams (Demberg et al., 2007), constraint satisfaction inference (Bosch and Canisius, 2006), pronunciation by analogy (Marchand and Damper, 2006), and decision trees (Black et al., 1998). The decoder module uses standard Viterbi for the 1-1 case, and a phrasal decoder (Zens and Ney, 2004) for the M-M case. We report the L2P performance in terms of word accuracy, which rewards only the completely correct output phoneme sequences. The data set is randomly split into 90% for training and 10% for testing. For all experiments, we hold out 5% of our training data to determine when to stop the online training process. Table 1 includes the results on the Combilex data set. The two rightmost columns correspond to our two test L2P systems. We observe that although better alignment quality does not always translate into better L2P accuracy, there is nevertheless a strong correlation between the two, especially for the weaker phoneme generation system. Interestingly, EM-Aggr matches the L2P accuracy obtained with the gold standard alignments. However, there is no reason to claim that the gold standard alignments are optimal for the L2P generation task, so that result should not be considered as an upper bound. Finally, we note that alignment entropy seems to match the L2P accuracy better than it matches alignment quality.
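As a concrete reference, the alignment entropy of equation (7) can be computed directly from the counts of letter-phoneme links in the final alignments. The sketch below assumes the links are available as a flat list of (letter, phoneme) pairs, one per link, and uses the natural logarithm, since the paper does not specify a base.

import math
from collections import Counter

def alignment_entropy(links):
    """links: iterable of (letter, phoneme) pairs, one per alignment link."""
    joint = Counter(links)
    total = sum(joint.values())
    phoneme_totals = Counter()
    for (letter, phoneme), count in joint.items():
        phoneme_totals[phoneme] += count
    H = 0.0
    for (letter, phoneme), count in joint.items():
        p_lp = count / total                        # P(l, p)
        p_l_given_p = count / phoneme_totals[phoneme]  # P(l | p)
        H -= p_lp * math.log(p_l_given_p)
    return H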
Tables 2 and 3 show the L2P results on several evaluation sets: English Celex, CMUDict, NETTalk, OALD, and French Brulex. The training sizes range from 19K to 106K words. We follow exactly the same data splits as in Bisani and Ney (2008). The TiMBL L2P generation method (Table 2) is applicable only to the 1-1 alignment models. ALINE produces the highest accuracy on four out of six datasets (including Combilex). The performance of IP-EM is comparable to 1-M-EM, but not consistently better. IP-align does not seem to measure up to the other algorithms. The discriminative approach (Table 3) is flexible enough to utilize all kinds of alignments. However, the M-M models perform clearly better than the 1-1 models. The only exception is NetTalk, which can be attributed to the fact that NetTalk already includes double-phonemes in its original formulation. In general, the 1-M-EM method achieves the best results among the 1-1 alignment methods. Overall, EM-Aggr achieves the best word accuracy in comparison to the other alignment methods, including the joint n-gram results, which are taken directly from the original paper of Bisani and Ney (2008). Except for the Brulex and CMUDict data sets, the differences between EM-Aggr and M-M-EM are statistically significant according to McNemar's test at the 90% confidence level.

Aligner       Celex-En  CMUDict  NETTalk  OALD   Brulex
BaseEM        85.66     71.49    68.60    80.76  88.41
ALINE         87.96     75.05    69.52    81.57  94.56
1-M-EM        88.08     75.11    70.78    81.78  94.54
IP-EM         88.00     75.09    70.10    81.76  94.96
M-M-EM        88.54     75.41    70.18    82.43  95.03
EM-Aggr       89.11     75.52    71.10    83.32  95.07
joint n-gram  88.58     75.47    69.00    82.51  93.75
Table 3: L2P word accuracy using the online discriminative system.

Figure 4: L2P word accuracy vs. alignment entropy.

Figure 4 contains a plot of alignment entropy values vs. L2P word accuracy. Each point represents an application of a particular alignment method to a different data set. It appears that there is only a weak correlation between alignment entropy and L2P accuracy. So far, we have been unable to find either direct or indirect evidence that alignment entropy is a reliable measure of letter-phoneme alignment quality.

10 Conclusion
We investigated several new methods for generating letter-phoneme alignments. The phonetic alignment is recommended for languages with little or no training data. The constraint-based approach achieves excellent accuracy at the cost of manual construction of seed mappings. The IP alignment requires no linguistic expertise and guarantees a minimal set of letter-phoneme mappings. The alignment by aggregation advances the state-of-the-art results in L2P conversion. We thoroughly evaluated the resulting alignments on several data sets by using them as input to two different L2P generation systems. Finally, we employed an independently constructed lexicon to demonstrate the close relationship between alignment quality and L2P conversion accuracy. One open question that we would like to investigate in the future is whether L2P conversion accuracy could be improved by treating letter-phoneme alignment links as latent variables, instead of committing to a single best alignment.

Acknowledgments
This research was supported by the Alberta Ingenuity, Informatics Circle of Research Excellence (iCORE), and the Natural Sciences and Engineering Research Council of Canada (NSERC).

References
Maximilian Bisani and Hermann Ney. 2008. Joint-sequence models for grapheme-to-phoneme conversion. Speech Communication, 50(5):434–451.
Alan W. Black, Kevin Lenzo, and Vincent Pagel. 1998.
Issues in building general letter to sound rules. In The Third ESCA Workshop in Speech Synthesis, pages 77–80. Antal Van Den Bosch and Sander Canisius. 2006. Improved morpho-phonological sequence processing with constraint satisfaction inference. Proceedings of the Eighth Meeting of the ACL Special Interest Group in Computational Phonology, SIGPHON ’06, pages 41–49. 787 Walter Daelemans and Antal Van Den Bosch. 1997. Language-independent data-oriented grapheme-tophoneme conversion. In Progress in Speech Synthesis, pages 77–89. New York, USA. Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2009. TiMBL: Tilburg Memory Based Learner, version 6.2, Reference Guide. ILK Research Group Technical Report Series no. 09-01. Robert I. Damper, Yannick Marchand, John DS. Marsters, and Alexander I. Bazin. 2005. Aligning text and phonemes for speech technology applications using an EM-like algorithm. International Journal of Speech Technology, 8(2):147–160. Vera Demberg, Helmut Schmid, and Gregor M¨ohler. 2007. Phonological constraints and morphological preprocessing for grapheme-to-phoneme conversion. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 96–103, Prague, Czech Republic. Herman Engelbrecht and Tanja Schultz. 2005. Rapid development of an afrikaans-english speech-tospeech translator. In International Workshop of Spoken Language Translation (IWSLT), Pittsburgh, PA, USA. Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden markov models to letter-to-phoneme conversion. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 372– 379, Rochester, New York, USA. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2008. Joint processing and discriminative training for letter-to-phoneme conversion. In Proceedings of ACL-08: HLT, pages 905–913, Columbus, Ohio, June. Association for Computational Linguistics. Grzegorz Kondrak. 2000. A new algorithm for the alignment of phonetic sequences. In Proceedings of NAACL 2000: 1st Meeting of the North American Chapter of the Association for Computational Linguistics, pages 288–295. Yannick Marchand and Robert I. Damper. 2000. A multistrategy approach to improving pronunciation by analogy. Computational Linguistics, 26(2):195– 219. Yannick Marchand and Robert I. Damper. 2006. Can syllabification improve pronunciation by analogy of English? Natural Language Engineering, 13(1):1– 24. Vladimir Pervouchine, Haizhou Li, and Bo Lin. 2009. Transliteration alignment. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 136–144, Suntec, Singapore, August. Association for Computational Linguistics. Korin Richmond, Robert A. J. Clark, and Sue Fitt. 2009. Robust LTS rules with the Combilex speech technology lexicon. In Proceedings od Interspeech, pages 1295–1298. Juergen Schroeter, Alistair Conkie, Ann Syrdal, Mark Beutnagel, Matthias Jilka, Volker Strom, Yeon-Jun Kim, Hong-Goo Kang, and David Kapilow. 2002. A perspective on the next challenges for TTS research. In IEEE 2002 Workshop on Speech Synthesis. Terrence J. Sejnowski and Charles R. Rosenberg. 1987. Parallel networks that learn to pronounce English text. In Complex Systems, pages 1:145–168. Paul Taylor. 2005. 
Hidden Markov Models for grapheme to phoneme conversion. In Proceedings of the 9th European Conference on Speech Communication and Technology. Kristina Toutanova and Robert C. Moore. 2001. Pronunciation modeling for improved spelling correction. In ACL ’02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 144–151, Morristown, NJ, USA. Richard Zens and Hermann Ney. 2004. Improvements in phrase-based statistical machine translation. In HLT-NAACL 2004: Main Proceedings, pages 257– 264, Boston, Massachusetts, USA. 788
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 789–797, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Using Document Level Cross-Event Inference to Improve Event Extraction Shasha Liao New York University 715 Broadway, 7th floor New York, NY 10003 USA [email protected] Ralph Grishman New York University 715 Broadway, 7th floor New York, NY 10003 USA [email protected] Abstract Event extraction is a particularly challenging type of information extraction (IE). Most current event extraction systems rely on local information at the phrase or sentence level. However, this local context may be insufficient to resolve ambiguities in identifying particular types of events; information from a wider scope can serve to resolve some of these ambiguities. In this paper, we use document level information to improve the performance of ACE event extraction. In contrast to previous work, we do not limit ourselves to information about events of the same type, but rather use information about other types of events to make predictions or resolve ambiguities regarding a given event. We learn such relationships from the training corpus and use them to help predict the occurrence of events and event arguments in a text. Experiments show that we can get 9.0% (absolute) gain in trigger (event) classification, and more than 8% gain for argument (role) classification in ACE event extraction. 1 Introduction The goal of event extraction is to identify instances of a class of events in text. The ACE 2005 event extraction task involved a set of 33 generic event types and subtypes appearing frequently in the news. In addition to identifying the event itself, it also identifies all of the participants and attributes of each event; these are the entities that are involved in that event. Identifying an event and its participants and attributes is quite difficult because a larger field of view is often needed to understand how facts tie together. Sometimes it is difficult even for people to classify events from isolated sentences. From the sentence: (1) He left the company. it is hard to tell whether it is a Transport event in ACE, which means that he left the place; or an End-Position event, which means that he retired from the company. However, if we read the whole document, a clue like “he planned to go shopping before he went home” would give us confidence to tag it as a Transport event, while a clue like “They held a party for his retirement” would lead us to tag it as an End-Position event. Such clues are evidence from the same event type. However, sometimes another event type is also a good predictor. For example, if we find a Start-Position event like “he was named president three years ago”, we are also confident to tag (1) as End-Position event. Event argument identification also shares this benefit. Consider the following two sentences: (2) A bomb exploded in Bagdad; seven people died while 11 were injured. (3) A bomb exploded in Bagdad; the suspect got caught when he tried to escape. If we only consider the local context of the trigger “exploded”, it is hard to determine that “seven people” is a likely Target of the Attack event in (2), or that the “suspect” is the Attacker of the Attack event, because the structures of (2) and (3) are quite similar. 
The only clue is from the semantic inference that a person who died may well have been a Target of the Attack event, and the person arrested is probably the Attacker of the Attack event. These may be seen as 789 examples of a broader textual inference problem, and in general such knowledge is quite difficult to acquire and apply. However, in the present case we can take advantage of event extraction to learn these rules in a simpler fashion, which we present below. Most current event extraction systems are based on phrase or sentence level extraction. Several recent studies use high-level information to aid local event extraction systems. For example, Finkel et al. (2005), Maslennikov and Chua (2007), Ji and Grishman (2008), and Patwardhan and Riloff (2007, 2009) tried to use discourse, document, or cross-document information to improve information extraction. However, most of this research focuses on single event extraction, or focuses on high-level information within a single event type, and does not consider information acquired from other event types. We extend these approaches by introducing cross-event information to enhance the performance of multi-event-type extraction systems. Cross-event information is quite useful: first, some events co-occur frequently, while other events do not. For example, Attack, Die, and Injure events very frequently occur together, while Attack and Marry are less likely to co-occur. Also, typical relations among the arguments of different types of events can be helpful in predicting information to be extracted. For example, the Victim of a Die event is probably the Target of the Attack event. As a result, we extend the observation that “a document containing a certain event is likely to contain more events of the same type”, and base our approach on the idea that “a document containing a certain type of event is likely to contain instances of related events”. In this paper, automatically extracted within-event and cross-event information is used to aid traditional sentence level event extraction. 2 Task Description Automatic Content Extraction (ACE) defines an event as a specific occurrence involving participants1, and it annotates 8 types and 33 subtypes of events. We first present some ACE terminology to understand this task more easily:  Entity: an object or a set of objects in one of the semantic categories of interest, referred to in the document by one or more 1 See http://projects.ldc.upenn.edu/ace/docs/English-Events- Guidelines_v5.4.3.pdf for a description of this task. (coreferential) entity mentions.  Entity mention: a reference to an entity (typically, a noun phrase)  Timex: a time expression including date, time of the day, season, year, etc.  Event mention: a phrase or sentence within which an event is described, including trigger and arguments. An event mention must have one and only one trigger, and can have an arbitrary number of arguments.  Event trigger: the main word that most clearly expresses an event occurrence. An ACE event trigger is generally a verb or a noun.  Event mention arguments (roles)2: the entity mentions that are involved in an event mention, and their relation to the event. For example, event Attack might include participants like Attacker, Target, or attributes like Time_within and Place. Arguments will be taggable only when they occur within the scope of the corresponding event, typically the same sentence. 
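The ACE terminology above maps naturally onto a few simple record types. The sketch below is only illustrative: the field names are hypothetical, and it omits character offsets, event-mention coreference and other details of the actual ACE annotation.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EntityMention:
    mention_id: str     # e.g. "0001-2-2"
    entity_id: str      # coreferential mentions share this id, e.g. "0001-2"
    head: str           # head word, e.g. "Bob"
    entity_type: str    # e.g. "PER", "GPE", or "Timex" for time expressions

@dataclass
class EventMention:
    event_type: str     # one of the 33 ACE subtypes, e.g. "Die"
    trigger: str        # the trigger word, e.g. "murder"
    # (role, filler) pairs; a role such as Victim may have several fillers
    arguments: List[Tuple[str, EntityMention]] = field(default_factory=list)

@dataclass
class Document:
    entity_mentions: List[EntityMention] = field(default_factory=list)
    event_mentions: List[EventMention] = field(default_factory=list)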
Consider the sentence: (4) Three murders occurred in France today, including the senseless slaying of Bob Cole and the assassination of Joe Westbrook. Bob was on his way home when he was attacked… Event extraction depends on previous phases like name identification, entity mention classification and coreference. Table 1 shows the results of this preprocessing. Note that entity mentions that share the same EntityID are coreferential and treated as the same object. Entity(Time x) mention head word Entity ID Entity type 0001-1-1 France 0001-1 GPE 0001-T1-1 Today 0001-T1 Timex 0001-2-1 Bob Cole 0001-2 PER 0001-3-1 Joe Westbrook 0001-3 PER 0001-2-2 Bob 0001-2 PER 0001-2-3 He 0001-2 PER Table 1. An example of entities and entity mentions and their types 2 Note that we do not deal with event mention coreference in this paper, so each event mention is treated as a separate event. 790 There are three Die events, which share the same Place and Time roles, with different Victim roles. And there is one Attack event sharing the same Place and Time roles with the Die events. Role Event type Trigger Place Victim Time Die murder 0001-1-1 0001-T1-1 Die death 0001-1-1 0001-2-1 0001-T1-1 Die killing 0001-1-1 0001-3-1 0001-T1-1 Role Event type Trigger Place Target Time Attack attack 0001-1-1 0001-2-3 0001-T1-1 Table2. An example of event trigger and roles In this paper, we treat the 33 event subtypes as separate event types and do not consider the hierarchical structure among them. 3 Related Work Almost all the current ACE event extraction systems focus on processing one sentence at a time (Grishman et al., 2005; Ahn, 2006; Hardy et al. 2006). However, there have been several studies using high-level information from a wider scope: Maslennikov and Chua (2007) use discourse trees and local syntactic dependencies in a pattern-based framework to incorporate wider context to refine the performance of relation extraction. They claimed that discourse information could filter noisy dependency paths as well as increasing the reliability of dependency path extraction. Finkel et al. (2005) used Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. They used this technique to augment an information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. Ji and Grishman (2008) were inspired from the hypothesis of “One Sense Per Discourse” (Yarowsky, 1995); they extended the scope from a single document to a cluster of topic-related documents and employed a rule-based approach to propagate consistent trigger classification and event arguments across sentences and documents. Combining global evidence from related documents with local decisions, they obtained an appreciable improvement in both event and event argument identification. Patwardhan and Riloff (2009) proposed an event extraction model which consists of two components: a model for sentential event recognition, which offers a probabilistic assessment of whether a sentence is discussing a domain-relevant event; and a model for recognizing plausible role fillers, which identifies phrases as role fillers based upon the assumption that the surrounding context is discussing a relevant event. 
This unified probabilistic model allows the two components to jointly make decisions based upon both the local evidence surrounding each phrase and the “peripheral vision”. Gupta and Ji (2009) used cross-event information within ACE extraction, but only for recovering implicit time information for events. 4 Motivation We analyzed the sentence-level baseline event extraction, and found that many events are missing or spuriously tagged because the local information is not sufficient to make a confident decision. In some local contexts, it is easy to identify an event; in others, it is hard to do so. Thus, if we first tag the easier cases, and use such knowledge to help tag the harder cases, we might get better overall performance. In addition, global information can make the event tagging more consistent at the document level. Here are some examples. For trigger classification: The pro-reform director of Iran's biggest-selling daily newspaper and official organ of Tehran's municipality has stepped down following the appointment of a conservative …it was founded a decade ago … but a conservative city council was elected in the February 28 municipal polls … Mahmud Ahmadi-Nejad, reported to be a hardliner among conservatives, was appointed mayor on Saturday …Founded by former mayor Gholamhossein Karbaschi, Hamshahri… 791 Figure 1. Conditional probability of the other 32 event types in documents where a Die event appears Figure 2. Conditional probability of the other 32 event types in documents where a Start-Org event appears The sentence level baseline system finds event triggers like “founded” (trigger of Start-Org), “elected” (trigger of Elect), and “appointment” (trigger of Start-Position), which are easier to identify because these triggers have more specific meanings. However, it does not recognize the trigger “stepped” (trigger of End-Position) because in the training corpus “stepped” does not always appear as an End-Position event, and local context does not provide enough information for the MaxEnt model to tag it as a trigger. However, in the document that contains related events like Start-Position, “stepped” is more likely to be tagged as an End-Position event. For argument classification, the cross-event evidence from the document level is also useful: British officials say they believe Hassan was a blindfolded woman seen being shot in the head by a hooded militant on a video obtained but not aired by the Arab television station Al-Jazeera. She would be the first foreign woman to die in the wave of kidnappings in Iraq…she's been killed by (men in pajamas), turn Iraq upside down and find them. From this document, the local information is not enough for our system to tag “Hassan” as the target of an Attack event, because it is quite far from the trigger “shot” and the syntax is somewhat complex. However, it is easy to tag “she” as the Victim of a Die event, because it is the object of the trigger “killed”. As “she” and “Hassan” are co-referred, we can use this easily tagged argument to help identify the harder one. 4.1 Trigger Consistency and Distribution Within a document, there is a strong trigger consistency: if one instance of a word triggers an event, other instances of the same word will trigger events of the same type3. There are also strong correlations among event types in a document. To see this we calculated the conditional probability (in the ACE corpus) of a certain event type appearing in a document when another event type appears in the same document. 
3 This is true over 99.4% of the time in the ACE corpus. 792 Figure 3. Conditional probability of all possible roles in other event types for entities that are the Targets of Attack events (roles with conditional probability below 0.002 are omitted) Event Cond. Prob. Attack 0.714 Transport 0.507 Injure 0.306 Meet 0.164 Arrest-Jail 0.153 Sentence 0.126 Phone-Write 0.111 End-Position 0.116 Trial-Hearing 0.105 Convict 0.100 Table 3. Events co-occurring with die events with conditional probability > 10% As there are 33 subtypes, there are potentially 33⋅32/2=528 event pairs. However, only a few of these appear with substantial frequency. For example, there are only 10 other event types that occur in more than 10% of the documents in which a die event appears. From Table 3, we can see that Attack, Transport and Injure events appear frequently with Die. We call these the related event types for Die (see Figure 1 and Table 3). The same thing happens for Start-Org events, although its distribution is quite different from Die events. For Start-Org, there are more related events like End-Org, Start-Position, and End-Position (Figure 2). But there are 12 other event types which never appear in documents containing Start-Org events. From the above, we can see that the distributions of different event types are quite different, and these distributions might be good predictors for event extraction. 4.2 Role Consistency and Distribution Normally one entity, if it appears as an argument of multiple events of the same type in a single document, is assigned the same role each time.4 There is also a strong relationship between the roles when an entity participates in different types of events in a single document. For example, we checked all the entities in the ACE corpus that appear as the Target role for an Attack event, and recorded the roles they were assigned for other event types. Only 31 other event-role combinations appeared in total (out of 237 possible with ACE annotation), and 3 clearly dominated. In Figure 3, we can see that the most likely roles for the Target role of the Attack event are the Victim role of the Die or Injure event and the Artifact role of the Transport event. The last of these corresponds to troop movements prior to or in response to attacks. 5 Cross-event Approach In this section we present our approach to using document-level event and role information to improve sentence-level ACE event extraction. Our event extraction system is a two-pass system where the sentence-level system is first applied to make decisions based on local information. Then the confident local information is collected and gives an approximate view of the content of the document. The document level system is finally applied to deal with the cases which the local 4 This is true over 97% of the time in the ACE corpus. 793 system can’t handle, and achieve document consistency. 5.1 Sentence-level Baseline System We use a state-of-the-art English IE system as our baseline (Grishman et al. 2005). This system extracts events independently for each sentence, because the definition of event mention argument constrains them to appear in the same sentence. The system combines pattern matching with statistical models. In the training process, for every event mention in the ACE training corpus, patterns are constructed based on the sequences of constituent heads separating the trigger and arguments. 
A set of Maximum Entropy based classifiers are also trained:  Argument Classifier: to distinguish arguments of a potential trigger from non-arguments;  Role Classifier: to classify arguments by argument role.  Reportable-Event Classifier (Trigger Classifier): Given a potential trigger, an event type, and a set of arguments, to determine whether there is a reportable event mention. In the test procedure, each document is scanned for instances of triggers from the training corpus. When an instance is found, the system tries to match the environment of the trigger against the set of patterns associated with that trigger. This pattern-matching process, if successful, will assign some of the mentions in the sentence as arguments of a potential event mention. The argument classifier is applied to the remaining mentions in the sentence; for any argument passing that classifier, the role classifier is used to assign a role to it. Finally, once all arguments have been assigned, the reportable-event classifier is applied to the potential event mention; if the result is successful, this event mention is reported.5 5.2 Document-level Confident Information Collector To use document-level information, we need to collect information based on the sentence-level baseline system. As it is a statistically-based model, it can provide a value that indicates how likely it is that this word is a trigger, or that the mention is an argument and has a particular role. 5 If the event arguments include some assigned by the pattern-matching process, the event mention is accepted unconditionally, bypassing the reportable- event classifier. We want to see if this value can be trusted as a confidence score. To this end, we set different thresholds from 0.1 to 1.0 in the baseline system output, and only evaluate triggers, arguments or roles whose confidence score is above the threshold. Results show that as the threshold is raised, the precision generally increases and the recall falls. This indicates that the value is consistent and a useful indicator of event/argument confidence (see Figure 4).6 Figure 4. The performance of different confidence thresholds in the baseline system on the development set To acquire confident document-level information, we only collect triggers and roles tagged with high confidence. Thus, a trigger threshold t_threshold and role threshold r_threshold are set to remove low confidence triggers and arguments. Finally, a table with confident event information is built. For every event, we collect its trigger and event type; for every argument, we use co-reference information and record every entity and its role(s) in events of a certain type. To achieve document consistency, in cases where the baseline system assigns a word to triggers for more than one event type, if the margin between the probability of the highest and the second highest scores is above a threshold m_threshold, we only keep the event type with highest score and record this in the confident-event table. Otherwise (if the margin is smaller) the event type assignments will be recorded in a separate conflict table. The same strategy is applied to argument/role conflicts. We will not use information in the conflict table to infer the event type or argument/roles for other event mentions, because we cannot 6 The trigger classification curve doesn’t follow the expected recall/precision trade-off, particularly at high thresholds. 
This is due, at least in part, to the fact that some events bypass the reportable-event classifier (trigger classifier) (see footnote 5). At high thresholds this is true of the bulk of the events. 794 confidently resolve the conflict. However, the event type and argument/role assignments in the conflict table will be included in the final output because the local confidence for the individual assignments is high. As a result, we finally build two document-level confident-event tables: the event type table and the argument (role) table. A conflict table is also built but not used for further predictions (see Table 4). Confident table Event type table Trigger Event Type Met Meet Exploded Attack Went Transport Injured Injure Attacked Attack Died Die Argument role table Entity ID Event type Role 0004-T2 Die Time Within 0004-6 Die Place 0004-4 Die Victim 0004-7 Die Agent 0004-11 Attack Target 0004-T3 Attack Time Within 0004-12 Attack Place 0004-10 Attack Attacker Conflict table Entity ID Event type Roles 0004-8 Attack Victim, Agent Table 4. Example of document-level confident-event table (event type and argument role entries) and conflict table 5.3 Statistical Cross-event Classifiers To take advantage of cross-event relationships, we train two additional MaxEnt classifiers – a document-level trigger and argument classifier – and then use these classifiers to infer additional events and event arguments. In analyzing new text, the trigger classifier is first applied to tag an event, and then the argument (role) classifier is applied to tag possible arguments and roles of this event. 5.3.1 Document Level Trigger Classifier From the document-level confident-event table, we have a rough view of what kinds of events are reported in this document. The trigger classifier predicts whether a word is the trigger of an event, and if so of what type, given the information (from the confident-event table) about other types of events in the document. Each feature of this classifier is the conjunction of: • The base form of the word • An event type • A binary indicator of whether this event type is present elsewhere in the document (There are 33 event types and so 33 features for each word). 5.3.2 Document Level Argument (Role) Classifier The role classifier predicts whether a given mention is an argument of a given event and, if so, what role it takes on, again using information from the confident-event table about other events. As noted above, we assume that the role of an entity is unique for a specific event type, although an entity can take on different roles for different event types. Thus, if there is a conflict in the document level table, the collector will only keep the one with highest confidence, or discard them all. As a result, every entity is assigned a unique role with respect to a particular event type, or null if it is not an argument of a certain event type. Each feature is the conjunction of: • The event type we are trying to assign an argument/role to. • One of the 32 other event types • The role of this entity with respect to the other event type elsewhere in the document, or null if this entity is not an argument of that type of event 5.4 Document Level Event Tagging At this point, the low-confidence triggers and arguments (roles) have been removed and the document-level confident-event table has been built; the new classifiers are now used to augment the confident tags that were previously assigned based on local information. 
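Before the tagging pass itself, it may help to see the cross-event features of Sections 5.3.1 and 5.3.2 spelled out in code. The sketch below is only illustrative: it assumes the confident-event table is exposed as a set of document-level event types plus a dictionary of confident (entity, event type) -> role entries, and every name is hypothetical rather than taken from the authors' implementation.

ALL_EVENT_TYPES = ["Die", "Attack", "Transport", "Injure", "Start-Org"]  # subset shown; ACE defines 33

def trigger_features(word, doc_event_types):
    """One binary feature per event type, conjoining the base form of the word,
    the event type, and whether that type is present elsewhere in the document."""
    base = word.lower()
    feats = {}
    for etype in ALL_EVENT_TYPES:
        present = etype in doc_event_types
        feats[f"word={base}|evtype={etype}|in_doc={present}"] = 1.0
    return feats

def role_features(target_event_type, entity_id, confident_roles):
    """confident_roles: {(entity_id, event_type): role} from the document-level
    argument table.  One feature per other event type, conjoined with the role
    (or None) that this entity holds for that type elsewhere in the document."""
    feats = {}
    for other in ALL_EVENT_TYPES:
        if other == target_event_type:
            continue
        role = confident_roles.get((entity_id, other))
        feats[f"target={target_event_type}|other={other}|role={role}"] = 1.0
    return feats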
For trigger tagging, we only apply the classifier to the words that do not have a confident local labeling; if the trigger is already in the document level confident-event table, we will not re-tag it. 795 performance system/human Trigger classification Argument classification Role classification P R F P R F P R F Sentence-level baseline system 67.56 53.54 59.74 46.45 37.15 41.29 41.02 32.81 36.46 Within-event-type rules 63.03 59.90 61.43 48.59 46.16 47.35 43.33 41.16 42.21 Cross-event statistical model 68.71 68.87 68.79 50.85 49.72 50.28 45.06 44.05 44.55 Human annotation1 59.2 59.4 59.3 60.0 69.4 64.4 51.6 59.5 55.3 Human annotation2 69.2 75.0 72.0 62.7 85.4 72.3 54.1 73.7 62.4 Table 5. Overall performance on blind test data The argument/role tagger is then applied to all events—those in the confident-event table and those newly tagged. For argument tagging, we only consider the entity mentions in the same sentence as the trigger word, because by the ACE event guidelines, the arguments of an event should appear within the same sentence as the trigger. For a given event, we re-tag the entity mentions that have not already been assigned as arguments of that event by the confident-event or conflict table. 6 Experiments We followed Ji and Grishman (2008)’s evaluation and randomly select 10 newswire texts from the ACE 2005 training corpora as our development set, which is used for parameter tuning, and then conduct a blind test on a separate set of 40 ACE 2005 newswire texts. We use the rest of the ACE training corpus (549 documents) as training data for both the sentence-level baseline event tagger and document-level event tagger. To compare with previous work on within-event propagation, we reproduced Ji and Grishman (2008)’s approach for cross-sentence, within-event-type inference (see “within-event-type rules” in Table 5). We applied their within-document inference rules using the cross-sentence confident-event information. These rules basically serve to adjust trigger and argument classification to achieve document-wide consistency. This process treats each event type separately: information about events of a given type is used to infer information about other events of the same type. We report the overall Precision (P), Recall (R), and F-Measure (F) on blind test data. In addition, we also report the performance of two human annotators on 28 ACE newswire texts (a subset of the blind test set).7 From the results presented in Table 5, we can see that using the document level cross-event information, we can improve the F score for trigger classification by 9.0%, argument classification by 9.0%, and role classification by 8.1%. Recall improved sharply, demonstrating that cross-event information could recover information that is difficult for the sentence-level baseline to extract; precision also improved over the baseline, although not as markedly. Compared to the within-event-type rules, the cross-event model yields much more improvement for trigger classification: rule-based propagation gains 1.7% improvement while the cross-event model achieves a further 7.3% improvement. For argument and role classification, the cross-event model also gains 3% and 2.3% above that obtained by the rule-based propagation process. 7 Conclusion and Future Work We propose a document-level statistical model for event trigger and argument (role) classification to achieve document level within-event and cross-event consistency. 
Experiments show that document-level information can improve the performance of a sentence-level baseline event extraction system. The model presented here is a simple two-stage recognition process; nonetheless, it has proven sufficient to yield substantial improvements in event recognition and event 7 The final key was produced by review and adjudication of the two annotations by a third annotator, which indicates that the event extraction task is quite difficult and human agreement is not very high. 796 argument recognition. Richer models, such as those based on joint inference, may produce even greater gains. In addition, extending the approach to cross-document information, following (Ji and Grishman 2008), may be able to further improve performance. References David Ahn. 2006. The stages of event extraction. In Proc. COLING/ACL 2006 Workshop on Annotating and Reasoning about Time and Events. Sydney, Australia. J. Finkel, T. Grenager, and C. Manning. 2005. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling. In Proc. 43rd Annual Meeting of the Association for Computational Linguistics, pages 363–370, Ann Arbor, MI, June. Ralph Grishman, David Westbrook and Adam Meyers. 2005. NYU’s English ACE 2005 System Description. In Proc. ACE 2005 Evaluation Workshop, Gaithersburg, MD. Prashant Gupta, Heng Ji. 2009. Predicting Unknown Time Arguments based on Cross-Event Propagation. In Proc. ACL-IJCNLP 2009. Hilda Hardy, Vika Kanchakouskaya and Tomek Strzalkowski. 2006. Automatic Event Classification Using Surface Text Features. In Proc. AAAI06 Workshop on Event Extraction and Synthesis. Boston, MA. H. Ji and R. Grishman. 2008. Refining Event Extraction through Cross-Document Inference. In Proc. ACL-08: HLT, pages 254–262, Columbus, OH, June. M. Maslennikov and T. Chua. 2007. A Multi resolution Framework for Information Extraction from Free Text. In Proc. 45th Annual Meeting of the Association of Computational Linguistics, pages 592–599, Prague, Czech Republic, June. S. Patwardhan and E. Riloff. 2007. Effective Information Extraction with Semantic Affinity Patterns and Relevant Regions. In Proc. Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2007, pages 717–727, Prague, Czech Republic, June. Patwardhan, S. and Riloff, E. 2009. A Unified Model of Phrasal and Sentential Evidence for Information Extraction. In Proc. Conference on Empirical Methods in Natural Language Processing 2009, (EMNLP-09). David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proc. ACL 1995. Cambridge, MA. 797
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 798–805, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Now, where was I? Resumption strategies for an in-vehicle dialogue system Jessica Villing Graduate School of Language Technology and Department of Philosophy, Linguistics and Theory of Science University of Gothenburg [email protected] Abstract In-vehicle dialogue systems often contain more than one application, e.g. a navigation and a telephone application. This means that the user might, for example, interrupt the interaction with the telephone application to ask for directions from the navigation application, and then resume the dialogue with the telephone application. In this paper we present an analysis of interruption and resumption behaviour in human-human in-vehicle dialogues and also propose some implications for resumption strategies in an in-vehicle dialogue system. 1 Introduction Making it useful and enjoyable to use a dialogue system is always important. The dialogue should be easy and intuitive, otherwise the user will not find it worth the effort and instead prefer to use manual controls or to speak to a human. However, when designing an in-vehicle dialogue system there is one more thing that needs to be taken into consideration, namely the fact that the user is performing an additional, safety critical, task - driving. The so-called 100-car study (Neale et al., 2005) revealed that secondary task distraction is the largest cause of driver inattention, and that the handling of wireless devices is the most common secondary task. Even if spoken dialogue systems enables manouvering of devices without using hands or eyes, it is crucial to adjust the interaction to the in-vehicle environment in order to minimize distraction from the interaction itself. Therefore the dialogue system should consider the cognitive load of the driver and adjust the dialogue accordingly. One way of doing this is to continously measure the cognitive workload level of the driver and, if the workload is high, determine type of workload and act accordingly. If the workload is dialogue-induced (i.e. caused by the dialogue itself), it might be necessary to rephrase or offer the user help with the task. If the workload is driving-induced (i.e. caused by the driving task), the user might need information that is crucial for the driving task (e.g. get navigation instructions), or to pause the dialogue in order to enable the user to concentrate on the driving task (Villing, 2009). Both the driver and the system should be able to initiate interruptions. When the interaction with a dialogue system has been interrupted, e.g. because the user has not answered a question, it is common that the system returns to the top menu. This means that if the user wants to finish the interrupted task she has to restart from the beginning, which is both timeconsuming and annoying. Instead, the dialogue system should be able to either pause until the workload is low or change topic and/or domain, and then resume where the interruption took place. However, resumption of an interrupted topic needs to be done in a way that minimizes the risk that the cognitive workload increases again. Although a lot of research has been done regarding dialogue system output, very little work has been done regarding resumption of an interrupted topic. 
In this paper we will analyse human-human in-vehicle dialogue to find out how resumptions are done in human-human dialogue and propose some implications for resumption strategies in a dialogue system. 2 Related work To study resumption behaviour, Yang (2009), carried out a data collection where the participants were switching between an ongoing task (a card game) and a real-time task (a picture game). The participants randomly had to interrupt the ongoing task to solve a problem in the real-time task. When studying the resumption behaviour after an 798 interruption to the real-time task they found that the resuming utterance contained various amounts and types of redundant information depending on whether the interruption occured in the middle of a card discussion, at the end of a card or at the end of a card game. If the interruption occured in the middle of a card discussion it was possible to make a distinction between utterance restatement (repeat one’s own utterance, repeat the dialogue partners utterance or clarification of the dialogue partners utterance) and card review (reviewing all the cards on hand although this information had already been given). They found that the behaviour is similar to grounding behaviour, where the speaker use repetition and requests for repetition to ensure that the utterance is understood. 3 Data collection A data collection has been carried out within the DICO project (see, for example, (Larsson and Villing, 2007)) to study how an additional distraction or increase in the cognitive load would affect a driver’s dialogue behaviour. The goal was to elicit a natural dialogue (as opposed to giving the driver a constructed task such as for example a math task) and make the participants engage in the conversation. The participants (two female and six male) between the ages of 25 and 36 drove a car in pairs while interviewing each other. The interview questions and the driving instructions were given to the passenger, hence the driver knew neither what questions to discuss nor the route in advance. Therefore, the driver had to signal, implicitly or explicitly, when she wanted driving instructions and when she wanted a new question to discuss. The passenger too had to have a strategy for when to change topic. The reasons for this setup was to elicit a natural and fairly intense dialogue and to force the participants to frequently change topic and/or domain (e.g. to get driving instructions). The participants changed roles after 30 minutes, which meant that each participant acted both as driver and as passenger. The cognitive load of the driver was measured in two ways. The driver performed a Tactile Detection Task (TDT) (van Winsum et al., 1999). When using a TDT, a buzzer is attached to the driver’s wrist. The driver is told to push a button each time the summer is activated. Cognitive load is determined by measuring hit-rate and reaction time. Although the TDT task in itself might cause an increased workload level, the task is performed during the whole session and thereby it is possible to distinguish high workload caused by something else but the TDT task. Workload was also measured by using an IDIS system (Broström et al., 2006). IDIS determines workload based on the driver’s behaviour (for example, steering wheel movements or applying the brake). What differs between the two measurements is that the TDT measures the actual workload of each driver, while IDIS makes its assumptions based on knowledge of what manouvres are usually cognitively demanding. 
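Purely as an illustration (the corpus description above gives no numeric criteria), the two measurements could be reduced to binary high/low workload indicators roughly as follows; every threshold, argument name and rule in this sketch is invented for the example.

def tdt_indicates_high_workload(hits, misses, mean_reaction_time_ms,
                                min_hit_rate=0.8, max_rt_ms=600.0):
    """Tactile Detection Task: a low hit-rate or slow responses are taken as a
    sign of high cognitive load.  Threshold values are hypothetical."""
    total = hits + misses
    hit_rate = hits / total if total else 1.0
    return hit_rate < min_hit_rate or mean_reaction_time_ms > max_rt_ms

def idis_indicates_high_workload(braking, turn_signal_on, steering_activity):
    """IDIS-style estimate: the driving task is assumed demanding whenever a
    manoeuvre that is typically demanding is under way (hypothetical rule)."""
    return braking or turn_signal_on or steering_activity > 0.5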
The participants were audio- and videotaped, the recordings are transcribed with the transcription tool ELAN1, using an orthographic transcription. All in all 3590 driver utterances and 4382 passenger utterances are transcribed. An annotation scheme was designed to enable analysis of utterances with respect to topic change for each domain. Domain and topic was defined as: • interview domain: discussions about the interview questions where each interview question was defined as a topic • navigation domain: navigation-related discussions where each navigation instruction was defined as a topic • traffic domain: discussions about the traffic situation and fellow road-users where each comment not belonging to a previous event was defined as a topic • other domain: anything that does not fit within the above domains where each comment not belonging to a previous event was defined as a topic Topic changes has been coded as follows: • begin-topic: whatever →new topic – I.e., the participants start discussing an interview question, a navigation instruction, make a remark about the traffic or anything else that has not been discussed before. • end-topic: finished topic →whatever 1http://www.lat-mpi.eu/tools/elan/ 799 – A topic is considered finished if a question is answered or if an instruction or a remark is confirmed. • interrupt-topic: unfinished topic →whatever – An utterance is considered to interrupt if it belongs to another topic than the previous utterance and the previous topic has not been ended with an end-topic. • resume-topic: whatever →unfinished topic – A topic is considered to be resumed if it has been discussed earlier but was not been finished by an end-topic but instead interrupted with an interrupt-topic. • reraise-topic: whatever →finished topic – A topic is considered to be reraised if it has been discussed before and then been finished with an end-topic. The utterances have been categorised according to the following schema: • DEC: declarative – (“You are a Leo and I am a Gemini”, “This is Ekelund Street”) • INT: interrogative – (“What do you eat for breakfast?”, “Should we go back after this?”) • IMP: imperative – (“Go on!”) • ANS: “yes” or “no” answer (and variations such as “sure, absolutely, nope, no way”) • NP: bare noun phrase – (“Wolfmother”, “Otterhall Street”) • ADVP: bare adverbial phrase – (“Further into Karlavagn Street”) • INC: incomplete phrase – (“Well, did I answer the”, “Should we”) Cognitive load has been annotated as: • reliable workload: annotated when workload is reliably high according to the TDT (reliability was low if response button was pressed more than 2 times after the event). • high: high workload according to IDIS • low: low workload according to IDIS The annotation schema has not been tested for inter-coder reliability. While full reliability testing would have further strengthened the results, we believe that our results are still useful as a basis for future implementation and experimental work. 4 Results The codings from the DICO data collection has been analysed with respect to interruption and resumption of topics (interrupt-topic and resumetopic, respectively). Interruption can be done in two ways, either to pause the dialogue or to change topic and/or domain. In the DICO corpus there are very few interruptions followed by a pause. The reason is probably that both the driver and the passenger were strongly engaged in the interview and navigation tasks. 
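Returning briefly to the annotation scheme above: the topic-change codes can in principle be assigned mechanically from the labelled topic sequence. The function below is one possible reading of that scheme, assuming each utterance has already been paired with its topic and with a flag marking whether it brings the topic to completion; the names are illustrative only.

def code_topic_changes(utterances):
    """utterances: list of (topic, is_finished) pairs in dialogue order, where
    is_finished marks that the utterance completes its topic (question answered,
    instruction or remark confirmed).  Returns one list of codes per utterance,
    following the begin/end/interrupt/resume/reraise scheme."""
    codes = []
    seen, finished = set(), set()
    prev_topic, prev_open = None, False
    for topic, is_finished in utterances:
        labels = []
        if topic != prev_topic:
            if prev_open:
                labels.append("interrupt-topic")   # previous topic left unfinished
            if topic not in seen:
                labels.append("begin-topic")
            elif topic in finished:
                labels.append("reraise-topic")
            else:
                labels.append("resume-topic")
        if is_finished:
            labels.append("end-topic")
            finished.add(topic)
        seen.add(topic)
        prev_topic, prev_open = topic, not is_finished
        codes.append(labels)
    return codes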
The fact that the driver did not know the route elicited frequent switches to the navigation domain done by both the driver and the passenger, as can be seen in Figure 1. Therefore, we have only analysed interruption and resumption from and to the interview and navigation domains. !" #!" $!" %!" &!" '!!" ()*+,-(+." )/-(" *,/01" 2*3+," Figure 1: Distribution of utterances coded as interrupt-topic for each domain, when interrupting from an interview topic. 4.1 Redundancy The easiest way of resuming an interrupted topic in a dialogue system is to repeat the last phrase that was uttered before the interruption. One disdavantage of this method is that the dialogue system might be seen as tedious, especially if there are several interruptions during the interaction. We wanted to see if the resuming utterances in humanhuman dialogue are redundant and if redundancy has anything to do with the length of the interruption. We therefore sorted all utterances coded 800 as resume-topic in two categories, those which contained redundant information when comparing with the last utterance before the interruption, and those which did not contain and redundant information. As a redundant utterance we counted all utterances that repeated one or more words from the last utterance before the interruption. We then counted the number of turns between the interruption and resumption. The number of turns varied between 1 and 42. The result can be seen in Figure 2. !" #" $!" $#" %!" %#" &'()*"""""" +,-" *.)/01" 2345.6" +#78" *.)/01" 9(/:""""""""""""""""" +;$!" *.)/01" <(/=)34./4>/*" ?34./4>/*" Figure 2: Number of redundant utterances depending on length of interruption. As can be seen, there are twice as many nonredundant as redundant utterances after a short interruption (≤4 turns), while there are almost solely redundant utterances after a long interruption (≥10 turns). The average number of turns is 3,5 when no redundancy occur, and 11,5 when there are redundancy. When the number of turns exceeds 12, there are only redundant utterances. 4.2 Category Figure 3 shows the distribution, sorted per category, of driver utterances when resuming to an interview and a navigation topic. Figure 4 shows the corresponding figures for passenger utterances. !"# $!"# %!"# &!"# '!"# (!"# )*+# ,-+# ,-.# -/# 0-1# 0)2/# 345678369# 4:83# Figure 3: Driver resuming to the interview and navigation domains. The driver’s behaviour is similar both when resuming to an interview and a navigation topic. Declarative phrases are most common, followed by incomplete, interrogative (for interview topics) and noun phrases. !"# $!"# %!"# &!"# '!"# (!"# )*+# ,-+# ,-.# -/# ,0/# 1)2/# 345678369# 4:83# Figure 4: Passenger resuming to the interview and navigation domains. When looking at the passenger utterances we see a lot of variation between the domains. When resuming to an interview topic the passenger uses mostly declarative phrases, followed by noun phrases and interrogative phrases. When resuming to a navigation topic imperative phrases are most common, followed by declarative phrases. Only the passenger use imperative phrases, probably since the passenger is managing both the interview questions and the navigation instructions and therefore is the one that is forcing both the interview and the navigation task through. 4.3 Workload level The in-vehicle environment is forcing the driver to carry out tasks during high cognitive workload. 
To minimize the risk of increasing the workload further, an in-vehicle dialogue system should be able to decide when to interrupt and when to resume a topic depending on the driver’s workload level. The figures in this section shows workload level and type of workload during interruption and resumption to and from topics in the interview domain. When designing the interview and navigation tasks that were to be carried out during the data collection, we focused on designing them so that the participants were encouraged to discuss as much as possible with each other. Therefore, the navigation instructions sometimes were hard to understand, which forced the participants to discuss the instructions and together try to interpret them. Therefore we have not analysed the workload level while interrupting and resuming topics in the navigation domain since the result might be 801 misleading. Type of workload is determined by analysing the TDT and IDIS signals described in 3. Workload is considered to be dialogue-induced when only the TDT is indicating high workload (since the TDT indicates that the driver is carrying out a task that is cognitively demanding but IDIS is not indicating that the driving task is demanding at the moment), driving-induced when both the TDT and IDIS is indicating high workload (since the TDT is indicating that the workload level is high and IDIS is indicating that the driving task is demanding) and possibly driving-induced when only IDIS is indicating high workload (since IDIS admittedly is indicating that the driving task is demanding but the TDT indicates that the driver’s workload is low, it could then be that this particular driver does not experience the driving task demanding even though the average driver does) (Villing, 2009). The data has been normalized for variation in workload time. The diagrams shows the distribution of interruption and resumption utterances made by the driver and the passenger, respectively. dialogueinduced possibly drivinginduced drivinginduced low workload Page 1 Figure 5: Workload while the driver is interrupting an interview topic. dialogueinduced possibly drivinginduced drivinginduced low workload Figure 6: Workload while the passenger is interrupting an interview topic. Figures 5 and 6 show driver workload level while the driver and the passenger (respectively) are interrupting from the interview domain. The driver most often interrupts during a possible driving-induced or low workload, the same goes for the passenger but in opposite order. It is least common for the driver to interrupt during dialogue- or driving-induced workload, while the passenger rarely interrupts during dialogueinduced and never during driving-induced workload. dialogueinduced possible drivinginduced drivinginduced low workload Page 1 Figure 7: Workload while driver is resuming to the interview domain. dialogueinduced possible drivinginduced drivinginduced low workload Page 1 Figure 8: Workload while passenger is resuming to the interview domain. Figures 7 and 8 show workload level while the driver and the passenger (respectively) are resuming to the interview domain. The driver most often resumes while the workload is low or possibly driving-induced, while the passenger is mostly resuming during low workload and never during driving-induced workload. 5 Discussion For both driver and passenger, the most common way to resume an interview topic is to use a declarative utterance, which is illustrated in Figure 3. 
When studying the utterances in detail we can see that there is a difference when comparing information redundancy similar to what Yang (2009) describe in their paper. They compared grade of 802 redundancy based on where in the dialogue the interruption occur, what we have looked at in the DICO corpus is how many turns the interrupting discussion contains. As Figure 2 shows, if the number of turns is about three (on average, 3,5), the participants tend to continue the interrupted topic exactly where it was interrupted, without considering that there had been any interruption. The speaker however often makes some sort of sequencing move to announce that he or she is about to switch domain and/or topic, either by using a standard phrase or by making an extra-lingustic sound like, for example, lipsmack or breathing (Villing et al., 2008). Example (1) shows how the driver interrupts a discussion about what book he is currently reading to get navigation instructions: (1) Driver: What I read now is Sofie’s world. Driver (interrupting): Yes, where do you want me to drive? Passenger: Straight ahead, straight ahead. Driver: Straight ahead. Alright, I’ll do that. Passenger (resuming): Alright [sequencing move]. Enemy of the enemy was the last one I read. [DEC] If the number of turns is higher than ten (on average, 11,5) the resuming speaker makes a redundant utterance, repeating one or more words from the last utterance before the interruption. See example (2): (2) Driver: Actually, I have always been interested in computers and technology. Passenger (interrupting): Turn right to Vasaplatsen. Is it here? No, this is Grönsakstorget. Driver: This is Grönsakstorget. We have passed Vasaplatsen. . . . (Discussion about how to turn around and get back to Vasaplatsen, all in all 21 turns.) Driver (resuming): Well, as I said [sequencing move]. I have always been interested in computer and computers and technology and stuff like that. [DEC] The passenger often uses a bare noun phrase to resume, the noun phrase can repeat a part of the interview question. For example, after a discussion about wonders of the world, which was interrupted by a discussion about which way to go next, the passenger resumed by uttering the single word “wonders” which was immediatly understood by the driver as a resumption to the interview topic. The noun phrase can also be a key phrase in the dialogue partner’s answer as in example (3) where the participants discuss their favourite band: (3) Driver: I like Wolfmother, do you know about them? Passenger: I’ve never heard about them. [...] You have to bring a cd so I can listen to them. Driver (interrupting): Where was I supposed to turn? . . . (Navigation discussion, all in all 13 turns.) Passenger (resuming): [LAUGHS]Wolfmother. [NP] When resuming to the navigation domain, the driver mostly uses a declarative phrase, typically to clarify an instruction. It is also common to use an interrogative phrase or an incomplete phrase such as “should I...” which the passenger answers by clarifying which way to go. The passenger instead uses mostly imperative phrases as a reminder of the last instruction, such as “keep straight on”. When the speakers interrupts an interview topic they mostly switch to the navigation domain, see Figure 1. 
That means that the most common reason for the speaker to interrupt is to ask for or give information that is crucial for the driving task (as opposed for the other and traffic domains, which are mostly used to signal that the speaker’s cognitive load level is high (Villing et al., 2008)). As can be seen in Figures 5 and 6, the driver mostly interrupts the interview domain during a possible driving-induced workload while the passenger mostly interrupts during low workload. As noted above (see also Figure 3), the utterances are mostly declarative (“this is Ekelund Street”), interrogative (“and now I turn left?”) or incomplete (“and then...”), while the passenger gives additional information that the driver has not asked for explicitly but the passenger judges that the driver might need (“just go straight ahead in the next crossing”, “here is where we should turn towards Järntorget”). Hence, it seems like the driver interrupts to make clarification utterances that must be answered immediately, for example, right before a 803 crossing when the driver has pressed the brakes or turned on the turn signal (and therefore the IDIS system signals high workload which is interpreted as driving-induced workload) while the passenger take the chance to give additional information in advance, before it is needed, and the workload therefore is low. Figure 7 shows that the driver mostly resumes to the interview domain during low or possible driving-induced workload. Since the IDIS system makes its assumption on driving behaviour, based on what the average driver finds cognitively demanding, it might sometimes be so that the system overgenerates and indicates high workload even though the driver at hand does not find the driving task cognitively demanding. This might be an explanation to these results, since the driver often resumes to an interview topic although he or she is, for example, driving through a roundabout or pushing the brakes. It is also rather common that the driver is resuming to an interview question during dialogue-induced workload, perhaps because she has started thinking about an answer to a question and therefore the TDT indicates high workload and the IDIS does not. The passenger mostly resumes to the interview domain during low workload, which indicates that the passenger analyses both the traffic situation and the state of mind of the driver before he or she wants to draw the drivers attention from the driving task. 6 Implications for in-vehicle dialogue systems In this paper we point at some of the dialogue strategies that are used in human-human dialogue during high cognitive load when resuming to an interrupted topic. These strategies should be taken under consideration when implementing an invehicle dialogue system. To make the dialogue natural and easy to understand the dialogue manager should consider which domain it will resume to and the number of turns between the interruption and resumption before deciding what phrase to use as output. For example, the results indicate that it might be more suitable to use a declarative phrase when resuming to a domain where the system is asking the user for information, for example when adding songs to a play list at the mp3-player (cf. the interview domain). If the number of turns are 4 or less, it probably does not have to make a redundant utterance at all, but may continue the discussion where it was interrupted. 
If the number of turns exceeds 4 it is probably smoother to let the system just repeat one or more keywords from the interrupted utterance to make the user understand what topic should be discussed, instead of repeating the whole utterance or even start the task from the beginning. This will make the system feel less tedious which should have a positive effect on the cognitive workload level. However, user tests are probably needed to decide how much redundant information is necessary when talking to a dialogue system, since it may well differ from talking to a human being who is able to help the listener understand by, for example, emphasizing certain words in a way that is currently impossible for a computer. When resuming to a domain where the system has information to give to the user it is suitable to make a short, informative utterance (e.g. “turn left here”, “traffic jam ahead, turn left instead”). Finally, it is also important to consider the cognitive workload level of the user to determine when - and if - to resume, and also whether the topic that is to be resumed belongs to a domain where the system has information to give to the user, or a domain where the user gives information to the system. For example, if the user is using a navigation system and he or she is experiencing driving-induced workload when approaching e.g. a crossing, it might be a good idea to give additional navigation information even though the user has not explicitly asked for it. If the user however is using a telephone application it is probably better to let the user initiate the resumption. The DICO corpus shows that it is the passenger that is most careful not to interrupt or resume when the driver’s workload is high, indicating that the system should let the user decide whether it is suitable to resume during high workload, while it is more accepted to let the system interrupt and resume when the workload is low. When resuming to the interview domain the driver (i.e. the user) mostly uses declarative phrases, either as an answer to a question or as a redundant utterance to clarify what was last said before the interruption. Therefore the dialogue system should be able to store not only what has been agreed upon regarding the interrupted task, but also the last few utterances to make it possible to interpret the user utterance as a resumption. 804 It is common that the driver utterances are incomplete, perhaps due to the fact that the driver’s primary task is the driving and therefore his or her mind is not always set on the dialogue task. Lindström (2008) showed that deletions are the most common disfluency during high cognitive load, which is supported by the results in this paper. The dialogue system should therefore be robust regarding ungrammatical utterances. 7 Future work Next we intend to implement strategies for interruption and resumption in the DICO dialogue system. The strategies will then be evaluated through user tests where the participants will compare an application with these strategies with an application without them. Cognitive workload will be measured as well as driving ability (for example, by using a Lane Change Task (Mattes, 2003)). The participants will also be interviewed in order to find out which version of the system is more pleasant to use. References Robert Broström, Johan Engström, Anders Agnvall, and Gustav Markkula. 2006. Towards the next generation intelligent driver information system (idis): The volvo cars interaction manager concept. 
In Proceedings of the 2006 ITS World Congress. Staffan Larsson and Jessica Villing. 2007. The dico project: A multimodal menu-based in-vehicle dialogue system. In H C Bunt and E C G Thijsse, editors, Proceedings of the 7th International Workshop on Computational Semantics (IWCS-7), page 4. Anders Lindström, Jessica Villing, Staffan Larsson, Alexander Seward, Nina Åberg, and Cecilia Holtelius. 2008. The effect of cognitive load on disfluencies during in-vehicle spoken dialogue. In Proceedings of Interspeech 2008, page 4. Stefan Mattes. 2003. The lane-change-task as a tool for driver distraction evaluation. In Proceedings of IGfA. V L Neale, T A Dingus, S G Klauer, J Sudweeks, and M Goodman. 2005. An overview of the 100-car naturalistic study and findings. In Proceedings of the 19th International Technical Conference on Enhanced Safety of Vehicles (ESV). W van Winsum, M Martens, and L Herland. 1999. The effect of speech versus tactile driver support messages on workload, driver behaviour and user acceptance. tno-report tm-99-c043. Technical report, Soesterberg, Netherlands. Jessica Villing, Cecilia Holtelius, Staffan Larsson, Anders Lindström, Alexander Seward, and Nina Åberg. 2008. Interruption, resumption and domain switching in in-vehicle dialogue. In Proceedings of GoTAL, 6th International Conference on Natural Language Processing, page 12. Jessica Villing. 2009. In-vehicle dialogue management - towards distinguishing between different types of workload. In Proceedings of SiMPE, Fourth Workshop on Speech in Mobile and Pervasive Environments, pages 14–21. Fan Yang and Peter A Heeman. 2009. Context restoration in multi-tasking dialogue. In IUI ’09: Proceedings of the 13th international conference on Intelligent user interfaces, pages 373–378, New York, NY, USA. ACM. 805
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 806–814, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Learning to Follow Navigational Directions Adam Vogel and Dan Jurafsky Department of Computer Science Stanford University {acvogel,jurafsky}@stanford.edu Abstract We present a system that learns to follow navigational natural language directions. Where traditional models learn from linguistic annotation or word distributions, our approach is grounded in the world, learning by apprenticeship from routes through a map paired with English descriptions. Lacking an explicit alignment between the text and the reference path makes it difficult to determine what portions of the language describe which aspects of the route. We learn this correspondence with a reinforcement learning algorithm, using the deviation of the route we follow from the intended path as a reward signal. We demonstrate that our system successfully grounds the meaning of spatial terms like above and south into geometric properties of paths. 1 Introduction Spatial language usage is a vital component for physically grounded language understanding systems. Spoken language interfaces to robotic assistants (Wei et al., 2009) and Geographic Information Systems (Wang et al., 2004) must cope with the inherent ambiguity in spatial descriptions. The semantics of imperative and spatial language is heavily dependent on the physical setting it is situated in, motivating automated learning approaches to acquiring meaning. Traditional accounts of learning typically rely on linguistic annotation (Zettlemoyer and Collins, 2009) or word distributions (Curran, 2003). In contrast, we present an apprenticeship learning system which learns to imitate human instruction following, without linguistic annotation. Solved using a reinforcement learning algorithm, our system acquires the meaning of spatial words through 1. go vertically down until you’re underneath eh diamond mine 2. then eh go right until you’re 3. you’re between springbok and highest viewpoint Figure 1: A path appears on the instruction giver’s map, who describes it to the instruction follower. grounded interaction with the world. This draws on the intuition that children learn to use spatial language through a mixture of observing adult language usage and situated interaction in the world, usually without explicit definitions (Tanz, 1980). Our system learns to follow navigational directions in a route following task. We evaluate our approach on the HCRC Map Task corpus (Anderson et al., 1991), a collection of spoken dialogs describing paths to take through a map. In this setting, two participants, the instruction giver and instruction follower, each have a map composed of named landmarks. Furthermore, the instruction giver has a route drawn on her map, and it is her task to describe the path to the instruction follower, who cannot see the reference path. Our system learns to interpret these navigational directions, without access to explicit linguistic annotation. We frame direction following as an apprenticeship learning problem and solve it with a reinforcement learning algorithm, extending previous work on interpreting instructions by Branavan et al. (2009). Our task is to learn a policy, or mapping 806 from world state to action, which most closely follows the reference route. 
Our state space combines world and linguistic features, representing both our current position on the map and the communicative content of the utterances we are interpreting. During training we have access to the reference path, which allows us to measure the utility, or reward, for each step of interpretation. Using this reward signal as a form of supervision, we learn a policy to maximize the expected reward on unseen examples. 2 Related Work Levit and Roy (2007) developed a spatial semantics for the Map Task corpus. They represent instructions as Navigational Information Units, which decompose the meaning of an instruction into orthogonal constituents such as the reference object, the type of movement, and quantitative aspect. For example, they represent the meaning of “move two inches toward the house” as a reference object (the house), a path descriptor (towards), and a quantitative aspect (two inches). These representations are then combined to form a path through the map. However, they do not learn these representations from text, leaving natural language processing as an open problem. The semantics in our paper is simpler, eschewing quantitative aspects and path descriptors, and instead focusing on reference objects and frames of reference. This simplifies the learning task, without sacrificing the core of their representation. Learning to follow instructions by interacting with the world was recently introduced by Branavan et al. (2009), who developed a system which learns to follow Windows Help guides. Our reinforcement learning formulation follows closely from their work. Their approach can incorporate expert supervision into the reward function in a similar manner to this paper, but is also able to learn effectively from environment feedback alone. The Map Task corpus is free form conversational English, whereas the Windows instructions are written by a professional. In the Map Task corpus we only observe expert route following behavior, but are not told how portions of the text correspond to parts of the path, leading to a difficult learning problem. The semantics of spatial language has been studied for some time in the linguistics literature. Talmy (1983) classifies the way spatial meaning is Figure 2: The instruction giver and instruction follower face each other, and cannot see each others maps. encoded syntactically, and Fillmore (1997) studies spatial terms as a subset of deictic language, which depends heavily on non-linguistic context. Levinson (2003) conducted a cross-linguistic semantic typology of spatial systems. Levinson categorizes the frames of reference, or spatial coordinate systems1, into 1. Egocentric: Speaker/hearer centered frame of reference. Ex: “the ball to your left”. 2. Allocentric: Speaker independent. Ex: “the road to the north of the house” Levinson further classifies allocentric frames of reference into absolute, which includes the cardinal directions, and intrinsic, which refers to a featured side of an object, such as “the front of the car”. Our spatial feature representation follows this egocentric/allocentric distinction. The intrinsic frame of reference occurs rarely in the Map Task corpus and is ignored, as speakers tend not to mention features of the landmarks beyond their names. Regier (1996) studied the learning of spatial language from static 2-D diagrams, learning to distinguish between terms with a connectionist model. He focused on the meaning of individual terms, pairing a diagram with a given word. 
In contrast, we learn from whole texts paired with a 1Not all languages exhibit all frames of reference. Terms for ‘up’ and ‘down’ are exhibited in most all languages, while ‘left’ and ‘right’ are absent in some. Gravity breaks the symmetry between ‘up’ and ‘down’ but no such physical distinction exists for ‘left’ and ‘right’, which contributes to the difficulty children have learning them. 807 path, which requires learning the correspondence between text and world. We use similar geometric features as Regier, capturing the allocentric frame of reference. Spatial semantics have also been explored in physically grounded systems. Kuipers (2000) developed the Spatial Semantic Hierarchy, a knowledge representation formalism for representing different levels of granularity in spatial knowledge. It combines sensory, metrical, and topological information in a single framework. Kuipers et al. demonstrate its effectiveness on a physical robot, but did not address the learning problem. More generally, apprenticeship learning is well studied in the reinforcement learning literature, where the goal is to mimic the behavior of an expert in some decision making domain. Notable examples include (Abbeel and Ng, 2004), who train a helicopter controller from pilot demonstration. 3 The Map Task Corpus The HCRC Map Task Corpus (Anderson et al., 1991) is a set of dialogs between an instruction giver and an instruction follower. Each participant has a map with small named landmarks. Additionally, the instruction giver has a path drawn on her map, and must communicate this path to the instruction follower in natural language. Figure 1 shows a portion of the instruction giver’s map and a sample of the instruction giver language which describes part of the path. The Map Task Corpus consists of 128 dialogs, together with 16 different maps. The speech has been transcribed and segmented into utterances, based on the length of pauses. We restrict our attention to just the utterances of the instruction giver, ignoring the instruction follower. This is to reduce redundancy and noise in the data - the instruction follower rarely introduces new information, instead asking for clarification or giving confirmation. The landmarks on the instruction follower map sometimes differ in location from the instruction giver’s. We ignore this caveat, giving the system access to the instruction giver’s landmarks, without the reference path. Our task is to build an automated instruction follower. Whereas the original participants could speak freely, our system does not have the ability to query the instruction giver and must instead rely only on the previously recorded dialogs. Figure 3: Sample state transition. Both actions get credit for visiting the great rock after the indian country. Action a1 also gets credit for passing the great rock on the correct side. 4 Reinforcement Learning Formulation We frame the direction following task as a sequential decision making problem. We interpret utterances in order, where our interpretation is expressed by moving on the map. Our goal is to construct a series of moves in the map which most closely matches the expert path. We define intermediate steps in our interpretation as states in a set S, and interpretive steps as actions drawn from a set A. To measure the fidelity of our path with respect to the expert, we define a reward function R : S × A →R+ which measures the utility of choosing a particular action in a particular state. 
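As a concrete, simplified rendering of this representation, the sketch below defines the state tuple together with the action and deterministic transition introduced in the next two subsections; the class and function names are ours, not the authors'.

from dataclasses import dataclass
from typing import Optional

CARDINALS = ("North", "South", "East", "West")   # possible values for the side c

@dataclass(frozen=True)
class State:
    utterance_index: int   # position i of the utterance u_i being interpreted
    landmark: str          # named landmark l we are currently next to
    side: str              # cardinal direction c: which side of l we are on

@dataclass(frozen=True)
class Action:
    landmark: Optional[str]   # target landmark l', or None for the null action
    side: Optional[str]       # side c' to pass l' on, or None for the null action

def transition(state: State, action: Action) -> State:
    """Deterministic transition: advance to the next utterance and, unless the
    action is the null action, move to the target landmark and side."""
    if action.landmark is None:
        return State(state.utterance_index + 1, state.landmark, state.side)
    return State(state.utterance_index + 1, action.landmark, action.side)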
Executing action a in state s carries us to a new state s′, and we denote this transition function by s′ = T(s, a). All transitions are deterministic in this paper.2 For training we are given a set of dialogs D. Each dialog d ∈ D is segmented into utterances (u1, . . . , um) and is paired with a map, which is composed of a set of named landmarks (l1, . . . , ln). 4.1 State The states of our decision making problem combine both our position in the dialog d and the path we have taken so far on the map. A state s ∈S is composed of s = (ui, l, c), where l is the named landmark we are located next to and c is a cardinal direction drawn from {North, South, East, West} which determines which side of l we are on. Lastly, ui is the utterance in d we are currently interpreting. 2Our learning algorithm is not dependent on a deterministic transition function and can be applied to domains with stochastic transitions, such as robot locomotion. 808 4.2 Action An action a ∈A is composed of a named landmark l, the target of the action, together with a cardinal direction c which determines which side to pass l on. Additionally, a can be the null action, with l = l′ and c = c′. In this case, we interpret an utterance without moving on the map. A target l together with a cardinal direction c determine a point on the map, which is a fixed distance from l in the direction of c. We make the assumption that at most one instruction occurs in a given utterance. This does not always hold true - the instruction giver sometimes chains commands together in a single utterance. 4.3 Transition Executing action a = (l′, c′) in state s = (ui, l, c) leads us to a new state s′ = T(s, a). This transition moves us to the next utterance to interpret, and moves our location to the target of the action. If a is the null action, s = (ui+1, l, c), otherwise s′ = (ui+1, l′, c′). Figure 3 displays the state transitions two different actions. To form a path through the map, we connect these state waypoints with a path planner3 based on A∗, where the landmarks are obstacles. In a physical system, this would be replaced with a robot motion planner. 4.4 Reward We define a reward function R(s, a) which measures the utility of executing action a in state s. We wish to construct a route which follows the expert path as closely as possible. We consider a proposed route P close to the expert path Pe if P visits landmarks in the same order as Pe, and also passes them on the correct side. For a given transition s = (ui, l, c), a = (l′, c′), we have a binary feature indicating if the expert path moves from l to l′. In Figure 3, both a1 and a2 visit the next landmark in the correct order. To measure if an action is to the correct side of a landmark, we have another binary feature indicating if Pe passes l′ on side c. In Figure 3, only a1 passes l′ on the correct side. In addition, we have a feature which counts the number of words in ui which also occur in the name of l′. This encourages us to choose policies which interpret language relevant to a given 3We used the Java Path Planning Library, available at http://www.cs.cmu.edu/˜ggordon/PathPlan/. landmark. Our reward function is a linear combination of these features. 4.5 Policy We formally define an interpretive strategy as a policy π : S →A, a mapping from states to actions. Our goal is to find a policy π which maximizes the expected reward Eπ[R(s, π(s))]. 
The expected reward of following policy π from state s is referred to as the value of s, expressed as V π(s) = Eπ[R(s, π(s))] (1) When comparing the utilities of executing an action a in a state s, it is useful to define a function Qπ(s, a) = R(s, a) + V π(T(s, a)) = R(s, a) + Qπ(T(s, a), π(s)) (2) which measures the utility of executing a, and following the policy π for the remainder. A given Q function implicitly defines a policy π by π(s) = max a Q(s, a). (3) Basic reinforcement learning methods treat states as atomic entities, in essence estimating V π as a table. However, at test time we are following new directions for a map we haven’t previously seen. Thus, we represent state/action pairs with a feature vector φ(s, a) ∈RK. We then represent the Q function as a linear combination of the features, Q(s, a) = θTφ(s, a) (4) and learn weights θ which most closely approximate the true expected reward. 4.6 Features Our features φ(s, a) are a mixture of world and linguistic information. The linguistic information in our feature representation includes the instruction giver utterance and the names of landmarks on the map. Additionally, we furnish our algorithm with a list of English spatial terms, shown in Table 1. Our feature set includes approximately 200 features. Learning exactly which words influence decision making is difficult; reinforcement learning algorithms have problems with the large, sparse feature vectors common in natural language processing. For a given state s = (u, l, c) and action a = (l′, c′), our feature vector φ(s, a) is composed of the following: 809 above, below, under, underneath, over, bottom, top, up, down, left, right, north, south, east, west, on Table 1: The list of given spatial terms. • Coherence: The number of words w ∈u that occur in the name of l′ • Landmark Locality: Binary feature indicating if l′ is the closest landmark to l • Direction Locality: Binary feature indicating if cardinal direction c′ is the side of l′ closest to (l, c) • Null Action: Binary feature indicating if l′ = NULL • Allocentric Spatial: Binary feature which conjoins the side c we pass the landmark on with each spatial term w ∈u. This allows us to capture that the word above tends to indicate passing to the north of the landmark. • Egocentric Spatial: Binary feature which conjoins the cardinal direction we move in with each spatial term w ∈u. For instance, if (l, c) is above (l′, c′), the direction from our current position is south. We conjoin this direction with each spatial term, giving binary features such as “the word down appears in the utterance and we move to the south”. 5 Approximate Dynamic Programming Given this feature representation, our problem is to find a parameter vector θ ∈RK for which Q(s, a) = θTφ(s, a) most closely approximates E[R(s, a)]. To learn these weights θ we use SARSA (Sutton and Barto, 1998), an online learning algorithm similar to Q-learning (Watkins and Dayan, 1992). Algorithm 1 details the learning algorithm, which we follow here. We iterate over training documents d ∈D. In a given state st, we act according to a probabilistic policy defined in terms of the Q function. After every transition we update θ, which changes how we act in subsequent steps. Exploration is a key issue in any RL algorithm. 
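As a concrete illustration of the linear Q-function over phi(s, a), here is a reduced sketch of the feature map and the Q computation. The full feature inventory has roughly 200 entries; the argument names, and the assumption that the closest landmark and the cardinal direction of movement are precomputed from the map geometry, are ours.

import numpy as np

SPATIAL_TERMS = ("above", "below", "under", "underneath", "over", "bottom",
                 "top", "up", "down", "left", "right",
                 "north", "south", "east", "west", "on")
CARDINALS = ("North", "South", "East", "West")

def features(utterance_words, action_landmark, action_side,
             closest_landmark, move_direction):
    """A reduced version of phi(s, a); the full feature set is larger."""
    words = [w.lower() for w in utterance_words]
    name_words = set((action_landmark or "").lower().split())
    phi = [
        float(sum(1 for w in words if w in name_words)),   # coherence
        float(action_landmark == closest_landmark),        # landmark locality
        float(action_landmark is None),                    # null action
    ]
    for term in SPATIAL_TERMS:
        present = term in words
        for d in CARDINALS:
            phi.append(float(present and action_side == d))     # allocentric conjunction
            phi.append(float(present and move_direction == d))  # egocentric conjunction
    return np.array(phi)

def q_value(theta, phi):
    """Q(s, a) = theta^T phi(s, a), as in Eq. (4)."""
    return float(np.dot(theta, phi))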
If we act greedily with respect to our current Q function, we might never visit states which are acInput: Dialog set D Reward function R Feature function φ Transition function T Learning rate αt Output: Feature weights θ 1 Initialize θ to small random values 2 until θ converges do 3 foreach Dialog d ∈D do 4 Initialize s0 = (l1, u1, ∅), a0 ∼Pr(a0|s0; θ) 5 for t = 0; st non-terminal; t++ do 6 Act: st+1 = T(st, at) 7 Decide: at+1 ∼Pr(at+1|st+1; θ) 8 Update: 9 ∆←R(st, at) + θTφ(st+1, at+1) 10 −θTφ(st, at) 11 θ ←θ + αtφ(st, at)∆ 12 end 13 end 14 end 15 return θ Algorithm 1: The SARSA learning algorithm. tually higher in value. We utilize Boltzmann exploration, for which Pr(at|st; θ) = exp( 1 τ θTφ(st, at)) P a′ exp( 1 τ θTφ(st, a′)) (5) The parameter τ is referred to as the temperature, with a higher temperature causing more exploration, and a lower temperature causing more exploitation. In our experiments τ = 2. Acting with this exploration policy, we iterate through the training dialogs, updating our feature weights θ as we go. The update step looks at two successive state transitions. Suppose we are in state st, execute action at, receive reward rt = R(st, at), transition to state st+1, and there choose action at+1. The variables of interest are (st, at, rt, st+1, at+1), which motivates the name SARSA. Our current estimate of the Q function is Q(s, a) = θTφ(s, a). By the Bellman equation, for the true Q function Q(st, at) = R(st, at) + max a′ Q(st+1, a′) (6) After each action, we want to move θ to minimize the temporal difference, R(st, at) + Q(st+1, at+1) −Q(st, at) (7) 810 Map 4g Map 10g Figure 4: Sample output from the SARSA policy. The dashed black line is the reference path and the solid red line is the path the system follows. For each feature φi(st, at), we change θi proportional to this temporal difference, tempered by a learning rate αt. We update θ according to θ = θ+αtφ(st, at)(R(st, at) + θTφ(st+1, at+1) −θTφ(st, at)) (8) Here αt is the learning rate, which decays over time4. In our case, αt = 10 10+t, which was tuned on the training set. We determine convergence of the algorithm by examining the magnitude of updates to θ. We stop the algorithm when ||θt+1 −θt||∞< ϵ (9) 6 Experimental Design We evaluate our system on the Map Task corpus, splitting the corpus into 96 training dialogs and 32 test dialogs. The whole corpus consists of approximately 105,000 word tokens. The maps seen at test time do not occur in the training set, but some of the human participants are present in both. 4To guarantee convergence, we require P t αt = ∞and P t α2 t < ∞. Intuitively, the sum diverging guarantees we can still learn arbitrarily far into the future, and the sum of squares converging guarantees that our updates will converge at some point. 6.1 Evaluation We evaluate how closely the path P generated by our system follows the expert path Pe. We measure this with respect to two metrics: the order in which we visit landmarks and the side we pass them on. To determine the order Pe visits landmarks we compute the minimum distance from Pe to each landmark, and threshold it at a fixed value. To score path P, we compare the order it visits landmarks to the expert path. A transition l →l′ which occurs in P counts as correct if the same transition occurs in Pe. Let |P| be the number of landmark transitions in a path P, and N the number of correct transitions in P. We define the order precision as N/|P|, and the order recall as N/|Pe|. 
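The visit-order scoring just defined can be sketched as follows, assuming the landmark visit sequences have already been extracted from the paths with the distance threshold described above; the landmark names in the usage note are only illustrative.

def order_precision_recall(predicted_visits, expert_visits):
    """Order precision N/|P| and recall N/|P_e|, where N is the number of
    landmark transitions in the predicted path also found in the expert path."""
    p_trans = list(zip(predicted_visits, predicted_visits[1:]))
    e_trans = set(zip(expert_visits, expert_visits[1:]))
    n = sum(1 for t in p_trans if t in e_trans)
    precision = n / len(p_trans) if p_trans else 0.0
    recall = n / (len(expert_visits) - 1) if len(expert_visits) > 1 else 0.0
    return precision, recall

# Example with hypothetical visit sequences:
# order_precision_recall(["indian country", "great rock", "springboks"],
#                        ["indian country", "great rock", "highest viewpoint"])
# returns (0.5, 0.5)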
We also evaluate how well we are at passing landmarks on the correct side. We calculate the distance of Pe to each side of the landmark, considering the path to visit a side of the landmark if the distance is below a threshold. This means that a path might be considered to visit multiple sides of a landmark, although in practice it is usu811 Figure 5: This figure shows the relative weights of spatial features organized by spatial word. The top row shows the weights of allocentric (landmark-centered) features. For example, the top left figure shows that when the word above occurs, our policy prefers to go to the north of the target landmark. The bottom row shows the weights of egocentric (absolute) spatial features. The bottom left figure shows that given the word above, our policy prefers to move in a southerly cardinal direction. ally one. If C is the number of landmarks we pass on the correct side, define the side precision as C/|P|, and the side recall as C/|Pe|. 6.2 Comparison Systems The baseline policy simply visits the closest landmark at each step, taking the side of the landmark which is closest. It pays no attention to the direction language. We also compare against the policy gradient learning algorithm of Branavan et al. (2009). They parametrize a probabilistic policy Pr(s|a; θ) as a log-linear model, in a similar fashion to our exploration policy. During training, the learning algorithm adjusts the weights θ according to the gradient of the value function defined by this distribution. Reinforcement learning algorithms can be classified into value based and policy based. Value methods estimate a value function V for each state, then act greedily with respect to it. Policy learning algorithms directly search through the space of policies. SARSA is a value based method, and the policy gradient algorithm is policy based. Visit Order Side P R F1 P R F1 Baseline 28.4 37.2 32.2 46.1 60.3 52.2 PG 31.1 43.9 36.4 49.5 69.9 57.9 SARSA 45.7 51.0 48.2 58.0 64.7 61.2 Table 2: Experimental results. Visit order shows how well we follow the order in which the answer path visits landmarks. ‘Side’ shows how successfully we pass on the correct side of landmarks. 7 Results Table 2 details the quantitative performance of the different algorithms. Both SARSA and the policy gradient method outperform the baseline, but still fall significantly short of expert performance. The baseline policy performs surprisingly well, especially at selecting the correct side to visit a landmark. The disparity between learning approaches and gold standard performance can be attributed to several factors. The language in this corpus is conversational, frequently ungrammatical, and contains troublesome aspects of dialog such as conversational repairs and repetition. Secondly, our action and feature space are relatively primitive, and don’t capture the full range of spatial expression. Path descriptors, such as the difference between around and past are absent, and our feature 812 representation is relatively simple. The SARSA learning algorithm accrues more reward than the policy gradient algorithm. Like most gradient based optimization methods, policy gradient algorithms oftentimes get stuck in local maxima, and are sensitive to the initial conditions. Furthermore, as the size of the feature vector K increases, the space becomes even more difficult to search. There are no guarantees that SARSA has reached the best policy under our feature space, and this is difficult to determine empirically. 
Thus, some accuracy might be gained by considering different RL algorithms. 8 Discussion Examining the feature weights θ sheds some light on our performance. Figure 5 shows the relative strength of weights for several spatial terms. Recall that the two main classes of spatial features in φ are egocentric (what direction we move in) and allocentric (on which side we pass a landmark), combined with each spatial word. Allocentric terms such as above and below tend to be interpreted as going to the north and south of landmarks, respectively. Interestingly, our system tends to move in the opposite cardinal direction, i.e. the agent moves south in the egocentric frame of reference. This suggests that people use above when we are already above a landmark. South slightly favors passing on the south side of landmarks, and has a heavy tendency to move in a southerly direction. This suggests that south is used more frequently in an egocentric reference frame. Our system has difficulty learning the meaning of right. Right is often used as a conversational filler, and also for dialog alignment, such as “right okay right go vertically up then between the springboks and the highest viewpoint.” Furthermore, right can be used in both an egocentric or allocentric reference frame. Compare “go to the uh right of the mine” which utilizes an allocentric frame, with “right then go eh uh to your right horizontally” which uses an egocentric frame of reference. It is difficult to distinguish between these meanings without syntactic features. 9 Conclusion We presented a reinforcement learning system which learns to interpret natural language directions. Critically, our approach uses no semantic annotation, instead learning directly from human demonstration. It successfully acquires a subset of spatial semantics, using reinforcement learning to derive the correspondence between instruction language and features of paths. While our results are still preliminary, we believe our model represents a significant advance in learning natural language meaning, drawing its supervision from human demonstration rather than word distributions or hand-labeled semantic tags. Framing language acquisition as apprenticeship learning is a fruitful research direction which has the potential to connect the symbolic, linguistic domain to the nonsymbolic, sensory aspects of cognition. Acknowledgments This research was partially supported by the National Science Foundation via a Graduate Research Fellowship to the first author and award IIS-0811974 to the second author and by the Air Force Research Laboratory (AFRL), under prime contract no. FA8750-09-C-0181. Thanks to Michael Levit and Deb Roy for providing digital representations of the maps and a subset of the corpus annotated with their spatial representation. References Pieter Abbeel and Andrew Y. Ng. 2004. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-first International Conference on Machine Learning. ACM Press. A. Anderson, M. Bader, E. Bard, E. Boyle, G. Doherty, S. Garrod, S. Isard, J. Kowtko, J. Mcallister, J. Miller, C. Sotillo, H. Thompson, and R. Weinert. 1991. The HCRC map task corpus. Language and Speech, 34, pages 351–366. S.R.K. Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In ACL-IJCNLP ’09. James Richard Curran. 2003. From Distributional to Semantic Similarity. Ph.D. thesis, University of Edinburgh. Charles Fillmore. 1997. Lectures on Deixis. 
Stanford: CSLI Publications. Benjamin Kuipers. 2000. The spatial semantic hierarchy. Artificial Intelligence, 119(1-2):191–233. 813 Stephen Levinson. 2003. Space In Language And Cognition: Explorations In Cognitive Diversity. Cambridge University Press. Michael Levit and Deb Roy. 2007. Interpretation of spatial language in a map navigation task. In IEEE Transactions on Systems, Man, and Cybernetics, Part B, 37(3), pages 667–679. Terry Regier. 1996. The Human Semantic Potential: Spatial Language and Constrained Connectionism. The MIT Press. Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. MIT Press. Leonard Talmy. 1983. How language structures space. In Spatial Orientation: Theory, Research, and Application. Christine Tanz. 1980. Studies in the acquisition of deictic terms. Cambridge University Press. Hongmei Wang, Alan M. Maceachren, and Guoray Cai. 2004. Design of human-GIS dialogue for communication of vague spatial concepts. In GIScience. C. J. C. H. Watkins and P. Dayan. 1992. Q-learning. Machine Learning, pages 8:279–292. Yuan Wei, Emma Brunskill, Thomas Kollar, and Nicholas Roy. 2009. Where to go: interpreting natural directions using global inference. In ICRA’09: Proceedings of the 2009 IEEE international conference on Robotics and Automation, pages 3761– 3767, Piscataway, NJ, USA. IEEE Press. Luke S. Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In ACL-IJCNLP ’09, pages 976–984. 814
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 815–824, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A Hybrid Hierarchical Model for Multi-Document Summarization Asli Celikyilmaz Computer Science Department University of California, Berkeley [email protected] Dilek Hakkani-Tur International Computer Science Institute Berkeley, CA [email protected] Abstract Scoring sentences in documents given abstract summaries created by humans is important in extractive multi-document summarization. In this paper, we formulate extractive summarization as a two step learning problem building a generative model for pattern discovery and a regression model for inference. We calculate scores for sentences in document clusters based on their latent characteristics using a hierarchical topic model. Then, using these scores, we train a regression model based on the lexical and structural characteristics of the sentences, and use the model to score sentences of new documents to form a summary. Our system advances current state-of-the-art improving ROUGE scores by ∼7%. Generated summaries are less redundant and more coherent based upon manual quality evaluations. 1 Introduction Extractive approach to multi-document summarization (MDS) produces a summary by selecting sentences from original documents. Document Understanding Conferences (DUC), now TAC, fosters the effort on building MDS systems, which take document clusters (documents on a same topic) and description of the desired summary focus as input and output a word length limited summary. Human summaries are provided for training summarization models and measuring the performance of machine generated summaries. Extractive summarization methods can be classified into two groups: supervised methods that rely on provided document-summary pairs, and unsupervised methods based upon properties derived from document clusters. Supervised methods treat the summarization task as a classification/regression problem, e.g., (Shen et al., 2007; Yeh et al., 2005). Each candidate sentence is classified as summary or non-summary based on the features that they pose and those with highest scores are selected. Unsupervised methods aim to score sentences based on semantic groupings extracted from documents, e.g., (Daum´eIII and Marcu, 2006; Titov and McDonald, 2008; Tang et al., 2009; Haghighi and Vanderwende, 2009; Radev et al., 2004; Branavan et al., 2009), etc. Such models can yield comparable or better performance on DUC and other evaluations, since representing documents as topic distributions rather than bags of words diminishes the effect of lexical variability. To the best of our knowledge, there is no previous research which utilizes the best features of both approaches for MDS as presented in this paper. In this paper, we present a novel approach that formulates MDS as a prediction problem based on a two-step hybrid model: a generative model for hierarchical topic discovery and a regression model for inference. We investigate if a hierarchical model can be adopted to discover salient characteristics of sentences organized into hierarchies utilizing human generated summary text. We present a probabilistic topic model on sentence level building on hierarchical Latent Dirichlet Allocation (hLDA) (Blei et al., 2003a), which is a generalization of LDA (Blei et al., 2003b). 
We construct a hybrid learning algorithm by extracting salient features to characterize summary sentences, and implement a regression model for inference (Fig.3). Contributions of this work are: −construction of hierarchical probabilistic model designed to discover the topic structures of all sentences. Our focus is on identifying similarities of candidate sentences to summary sentences using a novel tree based sentence scoring algorithm, concerning topic distributions at different levels of the discovered hierarchy as described in § 3 and § 4, −representation of sentences by meta-features to 815 characterize their candidacy for inclusion in summary text. Our aim is to find features that can best represent summary sentences as described in § 5, −implementation of a feasible inference method based on a regression model to enable scoring of sentences in test document clusters without retraining, (which has not been investigated in generative summarization models) described in § 5.2. We show in § 6 that our hybrid summarizer achieves comparable (if not better) ROUGE score on the challenging task of extracting the summaries of multiple newswire documents. The human evaluations confirm that our hybrid model can produce coherent and non-redundant summaries. 2 Background and Motivation There are many studies on the principles governing multi-document summarization to produce coherent and semantically relevant summaries. Previous work (Nenkova and Vanderwende, 2005; Conroy et al., 2006), focused on the fact that frequency of words plays an important factor. While, earlier work on summarization depend on a word score function, which is used to measure sentence rank scores based on (semi-)supervised learning methods, recent trend of purely data-driven methods, (Barzilay and Lee, 2004; Daum´eIII and Marcu, 2006; Tang et al., 2009; Haghighi and Vanderwende, 2009), have shown remarkable improvements. Our work builds on both methods by constructing a hybrid approach to summarization. Our objective is to discover from document clusters, the latent topics that are organized into hierarchies following (Haghighi and Vanderwende, 2009). A hierarchical model is particularly appealing to summarization than a ”flat” model, e.g. LDA (Blei et al., 2003b), in that one can discover ”abstract” and ”specific” topics. For instance, discovering that ”baseball” and ”football” are both contained in an abstract class ”sports” can help to identify summary sentences. It follows that summary topics are commonly shared by many documents, while specific topics are more likely to be mentioned in rather a small subset of documents. Feature based learning approaches to summarization methods discover salient features by measuring similarity between candidate sentences and summary sentences (Nenkova and Vanderwende, 2005; Conroy et al., 2006). While such methods are effective in extractive summarization, the fact that some of these methods are based on greedy algorithms can limit the application areas. Moreover, using information on the hidden semantic structure of document clusters would improve the performance of these methods. Recent studies focused on the discovery of latent topics of document sets in extracting summaries. In these models, the challenges of inferring topics of test documents are not addressed in detail. 
One of the challenges of using a previously trained topic model is that the new document might have a totally new vocabulary or may include many other specific topics, which may or may not exist in the trained model. A common method is to re-build a topic model for new sets of documents (Haghighi and Vanderwende, 2009), which has proven to produce coherent summaries. An alternative yet feasible solution, presented in this work, is building a model that can summarize new document clusters using characteristics of topic distributions of training documents. Our approach differs from the early work, in that, we combine a generative hierarchical model and regression model to score sentences in new documents, eliminating the need for building a generative model for new document clusters. 3 Summary-Focused Hierarchical Model Our MDS system, hybrid hierarchical summarizer, HybHSum, is based on an hybrid learning approach to extract sentences for generating summary. We discover hidden topic distributions of sentences in a given document cluster along with provided summary sentences based on hLDA described in (Blei et al., 2003a)1. We build a summary-focused hierarchical probabilistic topic model, sumHLDA, for each document cluster at sentence level, because it enables capturing expected topic distributions in given sentences directly from the model. Besides, document clusters contain a relatively small number of documents, which may limit the variability of topics if they are evaluated on the document level. As described in § 4, we present a new method for scoring candidate sentences from this hierarchical structure. Let a given document cluster D be represented with sentences O={om}|O| m=1 and its corresponding human summary be represented with sentences S={sn}|S| n=1. All sentences are comprised of words V =  w1, w2, ..w|V | in {O ∪S}. 1Please refer to (Blei et al., 2003b) and (Blei et al., 2003a) for details and demonstrations of topic models. 816 Summary hLDA (sumHLDA): The hLDA represents distribution of topics in sentences by organizing topics into a tree of a fixed depth L (Fig.1.a). Each candidate sentence om is assigned to a path com in the tree and each word wi in a given sentence is assigned to a hidden topic zom at a level l of com. Each node is associated with a topic distribution over words. The sampler method alternates between choosing a new path for each sentence through the tree and assigning each word in each sentence to a topic along that path. The structure of tree is learnt along with the topics using a nested Chinese restaurant process (nCRP) (Blei et al., 2003a), which is used as a prior. The nCRP is a stochastic process, which assigns probability distributions to infinitely branching and infinitely deep trees. In our model, nCRP specifies a distribution of words into paths in an L-level tree. The assignments of sentences to paths are sampled sequentially: The first sentence takes the initial L-level path, starting with a single branch tree. Later, mth subsequent sentence is assigned to a path drawn from the distribution: p(pathold, c|m, mc) = mc γ+m−1 p(pathnew, c|m, mc) = γ γ+m−1 (1) pathold and pathnew represent an existing and novel (branch) path consecutively, mc is the number of previous sentences assigned to path c, m is the total number of sentences seen so far, and γ is a hyper-parameter which controls the probability of creating new paths. Based on this probability each node can branch out a different number of child nodes proportional to γ. 
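Eq. (1) corresponds to the following per-node Chinese-restaurant choice, applied at each level of the L-level path. The sketch is ours and the names are illustrative; the sumHLDA modification described next keeps this step unchanged and only switches which gamma is passed in per sentence type.

import random

def sample_child(children_counts, gamma):
    """One CRP step at a node (Eq. 1): an existing child is chosen with
    probability proportional to its count m_c, a new child with probability
    proportional to gamma.  `children_counts` maps child id -> m_c."""
    total = sum(children_counts.values()) + gamma
    r = random.uniform(0.0, total)
    for child, count in children_counts.items():
        r -= count
        if r <= 0.0:
            return child
    return "NEW"   # open a new branch

# In sumHLDA (next paragraph), summary sentences use a tiny gamma_s, so they
# essentially never return "NEW", while candidate sentences use gamma_o.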
Small values of γ suppress the number of branches. Summary sentences generally comprise abstract concepts of the content. With sumHLDA we want to capture these abstract concepts in candidate sentences. The idea is to represent each path shared by similar candidate sentences with representative summary sentence(s). We let summary sentences share existing paths generated by similar candidate sentences instead of sampling new paths and influence the tree structure by introducing two separate hyper-parameters for nCRP prior: • if a summary sentence is sampled, use γ = γs, • if a candidate sentence is sampled, use γ = γo. At each node, we let summary sentences sample a path by choosing only from the existing children of that node with a probability proportional to the number of other sentences assigned to that child. This can be achieved by using a small value for γs (0 < γs ≪1). We only let candidate sentences to have an option of creating a new child node with a probability proportional to γo. By choosing γs ≪γo we suppress the generation of new branches for summary sentences and modify the γ of nCRP prior in Eq.(1) using γs and γo hyperparameters for different sentence types. In the experiments, we discuss the effects of this modification on the hierarchical topic tree. The following is the generative process for sumHLDA used in our HybHSum : (1) For each topic k ∈T, sample a distribution βk ∽Dirichlet(η). (2) For each sentence d ∈{O ∪S}, (a) if d ∈O, draw a path cd ∽nCRP(γo), else if d ∈S, draw a path cd ∽nCRP(γs). (b) Sample L-vector θd mixing weights from Dirichlet distribution θd ∼Dir(α). (c) For each word n, choose: (i) level zd,n|θd and (ii) word wd,n| {zd,n, cd, β} Given sentence d, θd is a vector of topic proportions from L dimensional Dirichlet parameterized by α (distribution over levels in the tree.) The nth word of d is sampled by first choosing a level zd,n = l from the discrete distribution θd with probability θd,l. Dirichlet parameter η and γo control the size of tree effecting the number of topics. (Small values of γs do not effect the tree.) Large values of η favor more topics (Blei et al., 2003a). Model Learning: Gibbs sampling is a common method to fit the hLDA models. The aim is to obtain the following samples from the posterior of: (i) the latent tree T, (ii) the level assignment z for all words, (iii) the path assignments c for all sentences conditioned on the observed words w. Given the assignment of words w to levels z and assignments of sentences to paths c, the expected posterior probability of a particular word w at a given topic z=l of a path c=c is proportional to the number of times w was generated by that topic: p(w|z, c, w, η) ∝n(z=l,c=c,w=w) + η (2) Similarly, posterior probability of a particular topic z in a given sentence d is proportional to number of times z was generated by that sentence: p(z|z, c, α) ∝n(c=cd,z=l) + α (3) n(.) is the count of elements of an array satisfying the condition. Note from Eq.(3) that two sentences d1 and d2 on the same path c would have 817 different words, and hence different posterior topic probabilities. Posterior probabilities are normalized with total counts and their hyperparameters. 4 Tree-Based Sentence Scoring The sumHLDA constructs a hierarchical tree structure of candidate sentences (per document cluster) by positioning summary sentences on the tree. Each sentence is represented by a path in the tree, and each path can be shared by many sentences. 
The assumption is that sentences sharing the same path should be more similar to each other because they share the same topics. Moreover, if a path includes a summary sentence, then candidate sentences on that path are more likely to be selected for summary text. In particular, the similarity of a candidate sentence om to a summary sentence sn sharing the same path is a measure of strength, indicating how likely om is to be included in the generated summary (Algorithm 1): Let com be the path for a given om. We find summary sentences that share the same path with om via: M = {sn ∈S|csn = com}. The score of each sentence is calculated by similarity to the best matching summary sentence in M: score(om) = maxsn∈M sim(om, sn) (4) If M=ø, then score(om)=ø. The efficiency of our similarity measure in identifying the best matching summary sentence, is tied to how expressive the extracted topics of our sumHLDA models are. Given path com, we calculate the similarity of om to each sn, n=1..|M| by measuring similarities on: ⋆sparse unigram distributions (sim1) at each topic l on com: similarity between p(wom,l|zom = l, com, vl) and p(wsn,l|zsn = l, com, vl) ⋆⋆distributions of topic proportions (sim2); similarity between p(zom|com) and p(zsn|com). −sim1: We define two sparse (discrete) unigram distributions for candidate om and summary sn at each node l on a vocabulary identified with words generated by the topic at that node, vl ⊂V . Given wom =  w1, ..., w|om| , let wom,l ⊂wom be the set of words in om that are generated from topic zom at level l on path com. The discrete unigram distribution poml = p(wom,l|zom = l, com, vl) represents the probability over all words vl assigned to topic zom at level l, by sampling only for words in wom,l. Similarly, psn,l = p(wsn,l|zsn, com, vl) is the probability of words wsn in sn of the same topic. The probability of each word in pom,l and psn,l are obtained using Eq. (2) and then normalized (see Fig.1.b). Algorithm 1 Tree-Based Sentence Scoring 1: Given tree T from sumHLDA, candidate and summary sentences: O = {o1, ..., om} , S = {s1, ..., sn} 2: for sentences m ←1, ..., |O| do 3: - Find path com on tree T and summary sentences 4: on path com: M = {sn ∈S|csn = com} 5: for summary sentences n ←1, ..., |M| do 6: - Find score(om)=maxsn sim(om, sn), 7: where sim(om, sn) = sim1 ∗sim2 8: using Eq.(7) and Eq.(8) 9: end for 10: end for 11: Obtain scores Y = {score(om)}|O| m=1 The similarity between pom,l and psn,l is obtained by first calculating the divergence with information radius- IR based on KullbackLiebler(KL) divergence, p=pom,l, q=psn,l : IRcom,l(pom,l, psn,l)=KL(p|| p+q 2 )+KL(q|| p+q 2 ) (5) where, KL(p||q)=P i pi log pi qi . Then the divergence is transformed into a similarity measure (Manning and Schuetze, 1999): Wcom,l(pom,l, psn,l) = 10−IRcom ,l(pom,l,psn,l) (6) IR is a measure of total divergence from the average, representing how much information is lost when two distributions p and q are described in terms of average distributions. We opted for IR instead of the commonly used KL because with IR there is no problem with infinite values since pi+qi 2 ̸=0 if either pi ̸=0 or qi̸=0. Moreover, unlike KL, IR is symmetric, i.e., KL(p,q)̸=KL(q,p). 
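The divergence-to-similarity transform of Eqs. (5)-(6) can be sketched as follows. Distributions are assumed to be dictionaries over the level vocabulary v_l already estimated via Eq. (2); the logarithm base is our assumption, since the paper does not specify it.

import math

def kl(p, q):
    """KL(p || q) over a shared support; assumes q[w] > 0 wherever p[w] > 0.
    Log base 2 is assumed here."""
    return sum(pi * math.log(pi / q[w], 2) for w, pi in p.items() if pi > 0.0)

def information_radius(p, q):
    """IR(p, q) = KL(p || m) + KL(q || m), m the average distribution (Eq. 5)."""
    support = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in support}
    return kl(p, m) + kl(q, m)

def ir_similarity(p, q):
    """W = 10^(-IR(p, q)) (Eq. 6): 1.0 for identical distributions, toward 0 as they diverge."""
    return 10.0 ** (-information_radius(p, q))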
Finally sim1 is obtained by average similarity of sentences using Eq.(6) at each level of com by: sim1(om, sn) = 1 L PL l=1 Wcom,l(pom,l, psn,l) ∗l (7) The similarity between pom,l and psn,l at each level is weighted proportional to the level l because the similarity between sentences should be rewarded if there is a specific word overlap at child nodes. −sim2: We introduce another measure based on sentence-topic mixing proportions to calculate the concept-based similarities between om and sn. We calculate the topic proportions of om and sn, represented by pzom = p(zom|com) and pzsn = p(zsn|com) via Eq.(3). The similarity between the distributions is then measured with transformed IR 818 (a) Snapshot of Hierarchical Topic Structure of a document cluster on “global warming”. (Duc06) z1 z2 z3 z z1 z2 z3 z Posterior Topic Distributions vz1 z3 . .. . . . . . . . w5 z2 w8 . . . ... .. w2 . z1 w5 ..... . . w7 w1 Posterior Topic-Word Distributions candidate om summary sn (b) Magnified view of sample path c [z1,z2,z3] showing om={w1,w2,w3,w4,w5} and sn={w1,w2,w6,w7,w8} ...... z1 zK-1 zK z4 z2 z3 human warming incidence research global predict health change disease forecast temperature slow malaria sneeze starving middle-east siberia om: “Global1 warming2 may rise3 incidence4 of malaria5.” sn:“Global1 warming2 effects6 human7 health8.” level:3 level:1 level:2 vz1 vz2 vz2 vz3 vz3 w1w5w6 w7.... w2 w8 .... w5 .... w5 .... w6 w1w5w6 w7.... . w2 w8 .... . pom z p sn z p(w |z1, c ) sn,1 sn p(w |z1, c ) om,1 om p(w |z2, c ) sn,2 sn p(w |z2, c ) om,2 om p(w |z3, c ) sn,3 sn p(w |z3, c ) om,3 om Figure 1: (a) A sample 3-level tree using sumHLDA. Each sentence is associated with a path c through the hierarchy, where each node zl,c is associated with a distribution over terms (Most probable terms are illustrated). (b) magnified view of a path (darker nodes) in (a). Distribution of words in given two sentences, a candidate (om) and a summary (sn) using sub-vocabulary of words at each topic vzl. Discrete distributions on the left are topic mixtures for each sentence, pzom and pzsn . as in Eq.(6) by: sim2 (om, sn) = 10−IRcom(pzom ,pzsn) (8) sim1 provides information about the similarity between two sentences, om and sn based on topicword distributions. Similarly, sim2 provides information on the similarity between the weights of the topics in each sentence. They jointly effect the sentence score and are combined in one measure: sim(om, sn) = sim1(om, sn) ∗sim2 (om, sn) (9) The final score for a given om is calculated from Eq.(4). Fig.1.b depicts a sample path illustrating sparse unigram distributions of om and sm at each level as well as their topic proportions, pzom, and pzsn. In experiment 3, we discuss the effect of our tree-based scoring on summarization performance in comparison to a classical scoring method presented as our baseline model. 5 Regression Model Each candidate sentence om, m = 1..|O| is represented with a multi-dimensional vector of q features fm = {fm1, ..., fmq}. We build a regression model using sentence scores as output and selected salient features as input variables described below: 5.1 Feature Extraction We compile our training dataset using sentences from different document clusters, which do not necessarily share vocabularies. 
Thus, we create ngram meta-features to represent sentences instead of word n-gram frequencies: (I) nGram Meta-Features (NMF): For each document cluster D, we identify most frequent (non-stop word) unigrams, i.e., vfreq = {wi}r i=1 ⊂ V , where r is a model parameter of number of most frequent unigram features. We measure observed unigram probabilities for each wi ∈vfreq with pD(wi) = nD(wi)/ P|V | j=1 nD(wj), where nD(wi) is the number of times wi appears in D and |V | is the total number of unigrams. For any ith feature, the value is fmi = 0, if given sentence does not contain wi, otherwise fmi = pD(wi). These features can be extended for any n-grams. We similarly include bigram features in the experiments. (II) Document Word Frequency MetaFeatures (DMF): The characteristics of sentences at the document level can be important in summary generation. DMF identify whether a word in a given sentence is specific to the document in consideration or it is commonly used in the document cluster. This is important because summary sentences usually contain abstract terms rather than specific terms. To characterize this feature, we re-use the r most frequent unigrams, i.e., wi ∈vfreq. Given sentence om, let d be the document that om belongs to, i.e., om ∈d. We measure unigram probabilities for each wi by p(wi ∈om) = nd(wi ∈ om)/nD(wi), where nd(wi ∈om) is the number of times wi appears in d and nD(wi) is the number of times wi appears in D. For any ith feature, the value is fmi = 0, if given sentence does not contain wi, otherwise fmi = p(wi ∈om). We also include bigram extensions of DMF features. 819 (III) Other Features (OF): Term frequency of sentences such as SUMBASIC are proven to be good predictors in sentence scoring (Nenkova and Vanderwende, 2005). We measure the average unigram probability of a sentence by: p(om) = P w∈om 1 |om|PD(w), where PD(w) is the observed unigram probability in the document collection D and |om| is the total number of words in om. We use sentence bigram frequency, sentence rank in a document, and sentence size as additional features. 5.2 Predicting Scores for New Sentences Due to the large feature space to explore, we chose to work with support vector regression (SVR) (Drucker et al., 1997) as the learning algorithm to predict sentence scores. Given training sentences {fm, ym}|O| m=1, where fm = {fm1, ..., fmq} is a multi-dimensional vector of features and ym=score(om)∈R are their scores obtained via Eq.(4), we train a regression model. In experiments we use non-linear Gaussian kernel for SVR. Once the SVR model is trained, we use it to predict the scores of ntest number of sentences in test (unseen) document clusters, Otest =  o1, ...o|Otest| . Our HybHSum captures the sentence characteristics with a regression model using sentences in different document clusters. At test time, this valuable information is used to score testing sentences. Redundancy Elimination: To eliminate redundant sentences in the generated summary, we incrementally add onto the summary the highest ranked sentence om and check if om significantly repeats the information already included in the summary until the algorithm reaches word count limit. We use a word overlap measure between sentences normalized to sentence length. A om is discarded if its similarity to any of the previously selected sentences is greater than a threshold identified by a greedy search on the training dataset. 
6 Experiments and Discussions In this section we describe a number of experiments using our hybrid model on 100 document clusters each containing 25 news articles from DUC2005-2006 tasks. We evaluate the performance of HybHSum using 45 document clusters each containing 25 news articles from DUC2007 task. From these sets, we collected ∽80K and ∽25K sentences to compile training and testing data respectively. The task is to create max. 250 word long summary for each document cluster. We use Gibbs sampling for inference in hLDA and sumHLDA. The hLDA is used to capture abstraction and specificity of words in documents (Blei et al., 2009). Contrary to typical hLDA models, to efficiently represent sentences in summarization task, we set ascending values for Dirichlet hyper-parameter η as the level increases, encouraging mid to low level distributions to generate as many words as in higher levels, e.g., for a tree of depth=3, η = {0.125, 0.5, 1}. This causes sentences share paths only when they include similar concepts, starting higher level topics of the tree. For SVR, we set ϵ = 0.1 using the default choice, which is the inverse of the average of φ(f)T φ(f) (Joachims, 1999), dot product of kernelized input vectors. We use greedy optimization during training based on ROUGE scores to find best regularizer C =  10−1..102 using the Gaussian kernel. We applied feature extraction of § 5.1 to compile the training and testing datasets. ROUGE is used for performance measure (Lin and Hovy, 2003; Lin, 2004), which evaluates summaries based on the maxium number of overlapping units between generated summary text and a set of human summaries. We use R-1 (recall against unigrams), R-2 (recall against bigrams), and R-SU4 (recall against skip-4 bigrams). Experiment 1: sumHLDA Parameter Analysis: In sumHLDA we introduce a prior different than the standard nested CRP (nCRP). Here, we illustrate that this prior is practical in learning hierarchical topics for summarization task. We use sentences from the human generated summaries during the discovery of hierarchical topics of sentences in document clusters. Since summary sentences generally contain abstract words, they are indicative of sentences in documents and should produce minimal amount of new topics (if not none). To implement this, in nCRP prior of sumHLDA, we use dual hyper-parameters and choose a very small value for summary sentences, γs = 10e−4 ≪γo. We compare the results to hLDA (Blei et al., 2003a) with nCRP prior which uses only one free parameter, γ. To analyze this prior, we generate a corpus of ∽1300 sentences of a document cluster in DUC2005. We repeated the experiment for 9 other clusters of similar size and averaged the total number of generated topics. We show results for different values of γ and γo hyper-parameters and tree depths. 820 γ = γo 0.1 1 10 depth 3 5 8 3 5 8 3 5 8 hLDA 3 5 8 41 267 1509 1522 4080 8015 sumHLDA 3 5 8 27 162 671 1207 3598 7050 Table 1: Average # of topics per document cluster from sumHLDA and hLDA for different γ and γo and tree depths. γs = 10e−4 is used for sumHLDA for each depth. Features Baseline HybHSum R-1 R-2 R-SU4 R-1 R-2 R-SU4 NMF (1) 40.3 7.8 13.7 41.6 8.4 12.3 DMF (2) 41.3 7.5 14.3 41.3 8.0 13.9 OF (3) 40.3 7.4 13.7 42.4 8.0 14.4 (1+2) 41.5 7.9 14.0 41.8 8.5 14.5 (1+3) 40.8 7.5 13.8 41.6 8.2 14.1 (2+3) 40.7 7.4 13.8 42.7 8.7 14.9 (1+2+3) 41.4 8.1 13.7 43.0 9.1 15.1 Table 2: ROUGE results (with stop-words) on DUC2006 for different features and methods. 
Results in bold show statistical significance over baseline in corresponding metric. As shown in Table 1, the nCRP prior for sumHLDA is more effective than hLDA prior in the summarization task. Less number of topics(nodes) in sumHLDA suggests that summary sentences share pre-existing paths and no new paths or nodes are sampled for them. We also observe that using γo = 0.1 causes the model to generate minimum number of topics (# of topics=depth), while setting γo = 10 creates excessive amount of topics. γ0 = 1 gives reasonable number of topics, thus we use this value for the rest of the experiments. In experiment 3, we use both nCRP priors in HybHSum to analyze whether there is any performance gain with the new prior. Experiment 2: Feature Selection Analysis Here we test individual contribution of each set of features on our HybHSum (using sumHLDA). We use a Baseline by replacing the scoring algorithm of HybHSum with a simple cosine distance measure. The score of a candidate sentence is the cosine similarity to the maximum matching summary sentence. Later, we build a regression model with the same features as our HybHSum to create a summary. We train models with DUC2005 and evaluate performance on DUC2006 documents for different parameter values as shown in Table 2. As presented in § 5, NMF is the bundle of frequency based meta-features on document cluster level, DMF is a bundle of frequency based metafeatures on individual document level and OF represents sentence term frequency, location, and size features. In comparison to the baseline, OF has a significant effect on the ROUGE scores. In addition, DMF together with OF has shown to improve all scores, in comparison to baseline, on average by 10%. Although the NMF have minimal individual improvement, all these features can statistically improve R-2 without stop words by 12% (significance is measured by t-test statistics). Experiment 3: ROUGE Evaluations We use the following multi-document summarization models along with the Baseline presented in Experiment 2 to evaluate HybSumm. ⋆ PYTHY : (Toutanova et al., 2007) A stateof-the-art supervised summarization system that ranked first in overall ROUGE evaluations in DUC2007. Similar to HybHSum, human generated summaries are used to train a sentence ranking system using a classifier model. ⋆ HIERSUM : (Haghighi and Vanderwende, 2009) A generative summarization method based on topic models, which uses sentences as an additional level. Using an approximation for inference, sentences are greedily added to a summary so long as they decrease KL-divergence. ⋆HybFSum (Hybrid Flat Summarizer): To investigate the performance of hierarchical topic model, we build another hybrid model using flat LDA (Blei et al., 2003b). In LDA each sentence is a superposition of all K topics with sentence specific weights, there is no hierarchical relation between topics. We keep the parameters and the features of the regression model of hierarchical HybHSum intact for consistency. We only change the sentence scoring method. Instead of the new tree-based sentence scoring (§ 4), we present a similar method using topics from LDA on sentence level. Note that in LDA the topic-word distributions φ are over entire vocabulary, and topic mixing proportions for sentences θ are over all the topics discovered from sentences in a document cluster. 
Hence, we define sim1 and sim2 measures for LDA using topic-word proportions φ (in place of discrete topic-word distributions from each level in Eq.2) and topic mixing weights θ in sentences (in place of topic proportions in Eq.3) respectively. Maximum matching score is calculated as same as in HybHSum. ⋆HybHSum1 and HybHSum2: To analyze the effect of the new nCRP prior of sumHLDA on sum821 ROUGE w/o stop words w/ stop words R-1 R-2 R-4 R-1 R-2 R-4 Baseline 32.4 7.4 10.6 41.0 9.3 15.2 PYTHY 35.7 8.9 12.1 42.6 11.9 16.8 HIERSUM 33.8 9.3 11.6 42.4 11.8 16.7 HybFSum 34.5 8.6 10.9 43.6 9.5 15.7 HybHSum1 34.0 7.9 11.5 44.8 11.0 16.7 HybHSum2 35.1 8.3 11.8 45.6 11.4 17.2 Table 3: ROUGE results of the best systems on DUC2007 dataset (best results are bolded.) marization model performance, we build two different versions of our hybrid model: HybHSum1 using standard hLDA (Blei et al., 2003a) and HybHSum2 using our sumHLDA. The ROUGE results are shown in Table 3. The HybHSum2 achieves the best performance on R1 and R-4 and comparable on R-2. When stop words are used the HybHSum2 outperforms stateof-the-art by 2.5-7% except R-2 (with statistical significance). Note that R-2 is a measure of bigram recall and sumHLDA of HybHSum2 is built on unigrams rather than bigrams. Compared to the HybFSum built on LDA, both HybHSum1&2 yield better performance indicating the effectiveness of using hierarchical topic model in summarization task. HybHSum2 appear to be less redundant than HybFSum capturing not only common terms but also specific words in Fig. 2, due to the new hierarchical tree-based sentence scoring which characterizes sentences on deeper level. Similarly, HybHSum1&2 far exceeds baseline built on simple classifier. The results justify the performance gain by using our novel tree-based scoring method. Although the ROUGE scores for HybHSum1 and HybHSum2 are not significantly different, the sumHLDA is more suitable for summarization tasks than hLDA. HybHSum2 is comparable to (if not better than) fully generative HIERSUM. This indicates that with our regression model built on training data, summaries can be efficiently generated for test documents (suitable for online systems). Experiment 4: Manual Evaluations Here, we manually evaluate quality of summaries, a common DUC task. Human annotators are given two sets of summary text for each document set, generated from two approaches: best hierarchical hybrid HybHSum2 and flat hybrid HybFSum models, and are asked to mark the better summary New federal rules for organic food will assure consumers that the products are grown and processed to the same standards nationwide. But as sales grew more than 20 percent a year through the 1990s, organic food came to account for $1 of every $100 spent on food, and in 1997 t h e a g e n c y t o o k n o t i c e , proposing national organic standards for all food. By the year 2001, organic products are projected to command 5 percent of total food sales in the United States. The sale of organics rose by about 30 percent last year, driven by concerns over food safety, the environment and a fear of genetically engineered food. U.S. sales of organic foods have grown by 20 percent annually for the last seven years. (c) HybFSum Output (b) HybHSum2 Output The Agriculture Department began to propose standards for all organic foods in the late 1990's because their sale had grown more than 20 per cent a year in that decade. 
In January 1999 the USDA approved a "certified organic" label for meats and poultry that were raised without growth hormones, pesticide-treated feed, and antibiotics. (a) Ref. Output word organic 6 6 6 genetic 2 4 3 allow 2 2 1 agriculture 1 1 1 standard 5 7 0 sludge 1 1 0 federal 1 1 0 bar 1 1 0 certified 1 1 0 specific HybHSum2 HybFSum Ref Figure 2: Example summary text generated by systems compared in Experiment 3. (Id:D0744 in DUC2007). Ref. is the human generated summary. Criteria HybFSum HybHSum2 Tie Non-redundancy 26 44 22 Coherence 24 56 12 Focus 24 56 12 Responsiveness 30 50 12 Overall 24 66 2 Table 4: Frequency results of manual quality evaluations. Results are statistically significant based on t-test. Tie indicates evaluations where two summaries are rated equal. according to five criteria: non-redundancy (which summary is less redundant), coherence (which summary is more coherent), focus and readability (content and not include unnecessary details), responsiveness and overall performance. We asked 4 annotators to rate DUC2007 predicted summaries (45 summary pairs per annotator). A total of 92 pairs are judged and evaluation results in frequencies are shown in Table 4. The participants rated HybHSum2 generated summaries more coherent and focused compared to HybFSum. All results in Table 4 are statistically significant (based on t-test on 95% confidence level.) indicating that HybHSum2 summaries are rated significantly better. 822 ... Document Cluster1 ... Document Cluster2 ... Document Clustern ... ... f1 f2 f3 fq f-input features ... f1 f2 f3 fq f-input features ... f1 f2 f3 fq f-input features h(f,y) : regression model for sentence ranking ... ... ... .. z zK z z z z sumHLDA ... ... .. z zK z z z z sumHLDA ... ... .. z zK z z z z sumHLDA ... ... y-output candidate sentence scores 0.02 0.01 0.0 . . y-output candidate sentence scores 0.35 0.09 0.01 . . y-output candidate sentence scores 0.43 0.20 0.03 . . Figure 3: Flow diagram for Hybrid Learning Algorithm for Multi-Document Summarization. 7 Conclusion In this paper, we presented a hybrid model for multi-document summarization. We demonstrated that implementation of a summary focused hierarchical topic model to discover sentence structures as well as construction of a discriminative method for inference can benefit summarization quality on manual and automatic evaluation metrics. Acknowledgement Research supported in part by ONR N00014-02-10294, BT Grant CT1080028046, Azerbaijan Ministry of Communications and Information Technology Grant, Azerbaijan University of Azerbaijan Republic and the BISC Program of UC Berkeley. References R. Barzilay and L. Lee. Catching the drift: Probabilistic content models with applications to generation and summarization. In In Proc. HLTNAACL’04, 2004. D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. Hierarchical topic models and the nested chinese restaurant process. In In Neural Information Processing Systems [NIPS], 2003a. D. Blei, T. Griffiths, and M. Jordan. The nested chinese restaurant process and bayesian nonparametric inference of topic hierarchies. In Journal of ACM, 2009. D. M. Blei, A. Ng, and M. Jordan. Latent dirichlet allocation. In Jrnl. Machine Learning Research, 3:993-1022, 2003b. S.R.K. Branavan, H. Chen, J. Eisenstein, and R. Barzilay. Learning document-level semantic properties from free-text annotations. In Journal of Artificial Intelligence Research, volume 34, 2009. J.M. Conroy, J.D. Schlesinger, and D.P. O’Leary. 
Topic focused multi-cument summarization using an approximate oracle score. In In Proc. ACL’06, 2006. H. Daum´eIII and D. Marcu. Bayesian query focused summarization. In Proc. ACL-06, 2006. H. Drucker, C.J.C. Burger, L. Kaufman, A. Smola, and V. Vapnik. Support vector regression machines. In NIPS 9, 1997. A. Haghighi and L. Vanderwende. Exploring content models for multi-document summarization. In NAACL HLT-09, 2009. T. Joachims. Making large-scale svm learning practical. In In Advances in Kernel Methods Support Vector Learning. MIT Press., 1999. C.-Y. Lin. Rouge: A package for automatic evaluation of summaries. In In Proc. ACL Workshop on Text Summarization Branches Out, 2004. 823 C.-Y. Lin and E.H. Hovy. Automatic evaluation of summaries using n-gram co-occurance statistics. In Proc. HLT-NAACL, Edmonton, Canada, 2003. C. Manning and H. Schuetze. Foundations of statistical natural language processing. In MIT Press. Cambridge, MA, 1999. A. Nenkova and L. Vanderwende. The impact of frequency on summarization. In Tech. Report MSR-TR-2005-101, Microsoft Research, Redwood, Washington, 2005. D.R. Radev, H. Jing, M. Stys, and D. Tam. Centroid-based summarization for multiple documents. In In Int. Jrnl. Information Processing and Management, 2004. D. Shen, J.T. Sun, H. Li, Q. Yang, and Z. Chen. Document summarization using conditional random fields. In Proc. IJCAI’07, 2007. J. Tang, L. Yao, and D. Chens. Multi-topic based query-oriented summarization. In SIAM International Conference Data Mining, 2009. I. Titov and R. McDonald. A joint model of text and aspect ratings for sentiment summarization. In ACL-08:HLT, 2008. K. Toutanova, C. Brockett, M. Gamon, J. Jagarlamudi, H. Suzuki, and L. Vanderwende. The phthy summarization system: Microsoft research at duc 2007. In Proc. DUC, 2007. J.Y. Yeh, H.-R. Ke, W.P. Yang, and I-H. Meng. Text summarization using a trainable summarizer and latent semantic analysis. In Information Processing and Management, 2005. 824
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 825–833, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Improving Statistical Machine Translation with Monolingual Collocation Zhanyi Liu1, Haifeng Wang2, Hua Wu2, Sheng Li1 1Harbin Institute of Technology, Harbin, China 2Baidu.com Inc., Beijing, China [email protected] {wanghaifeng, wu_hua}@baidu.com [email protected] Abstract This paper proposes to use monolingual collocations to improve Statistical Machine Translation (SMT). We make use of the collocation probabilities, which are estimated from monolingual corpora, in two aspects, namely improving word alignment for various kinds of SMT systems and improving phrase table for phrase-based SMT. The experimental results show that our method improves the performance of both word alignment and translation quality significantly. As compared to baseline systems, we achieve absolute improvements of 2.40 BLEU score on a phrase-based SMT system and 1.76 BLEU score on a parsing-based SMT system. 1 Introduction Statistical bilingual word alignment (Brown et al. 1993) is the base of most SMT systems. As compared to single-word alignment, multi-word alignment is more difficult to be identified. Although many methods were proposed to improve the quality of word alignments (Wu, 1997; Och and Ney, 2000; Marcu and Wong, 2002; Cherry and Lin, 2003; Liu et al., 2005; Huang, 2009), the correlation of the words in multi-word alignments is not fully considered. In phrase-based SMT (Koehn et al., 2003), the phrase boundary is usually determined based on the bi-directional word alignments. But as far as we know, few previous studies exploit the collocation relations of the words in a phrase. Some This work was partially done at Toshiba (China) Research and Development Center. researches used soft syntactic constraints to predict whether source phrase can be translated together (Marton and Resnik, 2008; Xiong et al., 2009). However, the constraints were learned from the parsed corpus, which is not available for many languages. In this paper, we propose to use monolingual collocations to improve SMT. We first identify potentially collocated words and estimate collocation probabilities from monolingual corpora using a Monolingual Word Alignment (MWA) method (Liu et al., 2009), which does not need any additional resource or linguistic preprocessing, and which outperforms previous methods on the same experimental data. Then the collocation information is employed to improve Bilingual Word Alignment (BWA) for various kinds of SMT systems and to improve phrase table for phrase-based SMT. To improve BWA, we re-estimate the alignment probabilities by using the collocation probabilities of words in the same cept. A cept is the set of source words that are connected to the same target word (Brown et al., 1993). An alignment between a source multi-word cept and a target word is a many-to-one multi-word alignment. To improve phrase table, we calculate phrase collocation probabilities based on word collocation probabilities. Then the phrase collocation probabilities are used as additional features in phrase-based SMT systems. The evaluation results show that the proposed method in this paper significantly improves multi-word alignment, achieving an absolute error rate reduction of 29%. 
The alignment improvement results in an improvement of 2.16 BLEU score on a phrase-based SMT system and an improvement of 1.76 BLEU score on a parsing-based SMT system. If we use phrase collocation probabilities as additional features, the phrase-based SMT performance is further improved by 0.24 BLEU score.

The paper is organized as follows: In section 2, we introduce the collocation model based on the MWA method. In sections 3 and 4, we show how to improve the BWA method and the phrase table using collocation models respectively. We describe the experimental results in sections 5, 6 and 7. Lastly, we conclude in section 8.

2 Collocation Model

Collocation is generally defined as a group of words that occur together more often than by chance (McKeown and Radev, 2000). A collocation is composed of two words occurring as either a consecutive word sequence or an interrupted word sequence in sentences, such as "by accident" or "take ... advice". In this paper, we use the MWA method (Liu et al., 2009) for collocation extraction. This method adapts the bilingual word alignment algorithm to the monolingual scenario to extract collocations only from monolingual corpora. The experimental results in (Liu et al., 2009) showed that this method achieved higher precision and recall than previous methods on the same experimental data.

2.1 Monolingual word alignment

The monolingual corpus is first replicated to generate a parallel corpus, where each sentence pair consists of two identical sentences in the same language. Then the monolingual word alignment algorithm is employed to align the potentially collocated words in the monolingual sentences. According to Liu et al. (2009), we employ the MWA Model 3 (corresponding to IBM Model 3) to calculate the probability of the monolingual word alignment sequence, as shown in Eq. (1).

$p_{\text{MWA Model 3}}(S, A \mid S) = \prod_{i=1}^{l} n(\phi_i \mid w_i) \prod_{j=1}^{l} t(w_j \mid w_{a_j}) \, d(j \mid a_j, l)$  (1)

Where $S = w_1^l$ is a monolingual sentence, and $\phi_i$ denotes the number of words that are aligned with $w_i$. Since a word never collocates with itself, the alignment set is denoted as $A = \{(i, a_i) \mid i \in [1, l] \;\&\; a_i \neq i\}$. Three kinds of probabilities are involved in this model: word collocation probability $t(w_j \mid w_{a_j})$, position collocation probability $d(j \mid a_j, l)$ and fertility probability $n(\phi_i \mid w_i)$.

In the MWA method, a similar algorithm to bilingual word alignment is used to estimate the parameters of the models, except that a word cannot be aligned to itself. Figure 1 shows an example of the potentially collocated word pairs aligned by the MWA method.

Figure 1. MWA Example (alignment links between potentially collocated words in two copies of the sentence "The team leader plays a key role in the project undertaking.")

2.2 Collocation probability

Given the monolingual word aligned corpus, we calculate the frequency of two words aligned in the corpus, denoted as $freq(w_i, w_j)$. We filtered the aligned words occurring only once. Then the probability for each aligned word pair is estimated as follows:

$p(w_i \mid w_j) = \frac{freq(w_i, w_j)}{\sum_{w} freq(w, w_j)}$  (2)

$p(w_j \mid w_i) = \frac{freq(w_i, w_j)}{\sum_{w} freq(w_i, w)}$  (3)

In this paper, the words of a collocation are symmetric and we do not determine which word is the head and which word is the modifier. Thus, the collocation probability of two words is defined as the average of both probabilities, as in Eq. (4).

$r(w_i, w_j) = \frac{p(w_i \mid w_j) + p(w_j \mid w_i)}{2}$  (4)
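As a concrete illustration of Eqs. (2)-(4), the sketch below (ours, not the authors' implementation) estimates the symmetric word collocation probabilities from the links produced by the monolingual word aligner; the input representation, the function name, and the variable names are assumptions made only for this example.

```python
from collections import defaultdict

def collocation_probabilities(aligned_pairs):
    """Estimate symmetric collocation probabilities r(wi, wj) (Eqs. 2-4).

    aligned_pairs: iterable of (wi, wj) word pairs produced by the
    monolingual word aligner, one pair per aligned link in the corpus.
    """
    freq = defaultdict(int)          # freq(wi, wj)
    left_totals = defaultdict(int)   # sum over w of freq(wi, w)
    right_totals = defaultdict(int)  # sum over w of freq(w, wj)

    for wi, wj in aligned_pairs:
        freq[(wi, wj)] += 1

    # Filter out aligned pairs occurring only once, as described in Section 2.2.
    freq = {pair: c for pair, c in freq.items() if c > 1}

    for (wi, wj), c in freq.items():
        left_totals[wi] += c
        right_totals[wj] += c

    r = {}
    for (wi, wj), c in freq.items():
        p_i_given_j = c / right_totals[wj]               # Eq. (2)
        p_j_given_i = c / left_totals[wi]                # Eq. (3)
        r[(wi, wj)] = 0.5 * (p_i_given_j + p_j_given_i)  # Eq. (4)
    return r
```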
If we have multiple monolingual corpora to estimate the collocation probabilities, we interpolate the probabilities as shown in Eq. (5).

$r(w_i, w_j) = \sum_{k} \lambda_k \, r_k(w_i, w_j)$  (5)

$\lambda_k$ denotes the interpolation coefficient for the probabilities estimated on the kth corpus.

3 Improving Statistical Bilingual Word Alignment

We use the collocation information to improve both one-directional and bi-directional bilingual word alignments. The alignment probabilities are re-estimated by using the collocation probabilities of words in the same cept.

3.1 Improving one-directional bilingual word alignment

According to the BWA method, given a bilingual sentence pair $E = e_1^l$ and $F = f_1^m$, the optimal alignment sequence A between E and F can be obtained as in Eq. (6).

$A^{*} = \arg\max_{A} p(F, A \mid E)$  (6)

The method is implemented in a series of five models (IBM Models). IBM Model 1 only employs the word translation model to calculate the probabilities of alignments. In IBM Model 2, both the word translation model and position distribution model are used. IBM Model 3, 4 and 5 consider the fertility model in addition to the word translation model and position distribution model. And these three models are similar, except for the word distortion models. One-to-one and many-to-one alignments could be produced by using IBM models. Although the fertility model is used to restrict the number of source words in a cept and the position distortion model is used to describe the correlation of the positions of the source words, the quality of many-to-one alignments is lower than that of one-to-one alignments. Intuitively, the probability of the source words aligned to a target word is not only related to the fertility ability and their relative positions, but also related to lexical tokens of words, such as common phrase or idiom. In this paper, we use the collocation probability of the source words in a cept to measure their correlation strength. Given source words $\{f_j \mid a_j = i\}$ aligned to $e_i$, their collocation probability is calculated as in Eq. (7).

$r(\{f_j \mid a_j = i\}) = \frac{2 \sum_{k=1}^{\phi_i - 1} \sum_{g=k+1}^{\phi_i} r(f_{[i]k}, f_{[i]g})}{\phi_i \ast (\phi_i - 1)}$  (7)

Here, $f_{[i]k}$ and $f_{[i]g}$ denote the kth word and gth word in $\{f_j \mid a_j = i\}$; $r(f_{[i]k}, f_{[i]g})$ denotes the collocation probability of $f_{[i]k}$ and $f_{[i]g}$, as shown in Eq. (4). Thus, the collocation probability of the alignment sequence of a sentence pair can be calculated according to Eq. (8).

$r(F, A \mid E) = \prod_{i=1}^{l} r(\{f_j \mid a_j = i\})$  (8)

Based on the maximum entropy framework, we combine the collocation model and the BWA model to calculate the word alignment probability of a sentence pair, as shown in Eq. (9).

$p(F, A \mid E) = \frac{\exp(\sum_{i} \lambda_i h_i(F, E, A))}{\sum_{A'} \exp(\sum_{i} \lambda_i h_i(F, E, A'))}$  (9)

Here, $h_i(F, E, A)$ and $\lambda_i$ denote features and feature weights, respectively. We use two features in this paper, namely alignment probabilities and collocation probabilities. Thus, we obtain the decision rule:

$A^{*} = \arg\max_{A} \{\sum_{i} \lambda_i h_i(F, E, A)\}$  (10)

Based on the GIZA++ package1, we implemented a tool for the improved BWA method. We first train IBM Model 4 and the collocation model on the bilingual corpus and monolingual corpus respectively. Then we employ the hill-climbing algorithm (Al-Onaizan et al., 1999) to search for the optimal alignment sequence of a given sentence pair, where the score of an alignment sequence is calculated as in Eq. (10). We note that Eq.
(8) only deals with many-to-one alignments, but the alignment sequence of a sentence pair also includes one-to-one alignments. To calculate the collocation probability of the alignment sequence, we should also consider the collocation probabilities of such one-to-one alignments. To solve this problem, we use the collocation probability of the whole source sentence, $r(F)$, as the collocation probability of a one-word cept.

3.2 Improving bi-directional bilingual word alignments

In the word alignment models implemented in GIZA++, only one-to-one and many-to-one word alignment links can be found. Thus, some multi-word units cannot be correctly aligned. The symmetrization method is used to effectively overcome this deficiency (Och and Ney, 2003). Bi-directional alignments are generally obtained from source-to-target alignments $A_{s2t}$ and target-to-source alignments $A_{t2s}$, using some heuristic rules (Koehn et al., 2005). This method ignores the correlation of the words in the same alignment unit, so an alignment may include many unrelated words2, which influences the performances of SMT systems.

1 http://www.fjoch.com/GIZA++.html
2 In our experiments, a multi-word unit may include up to 40 words.

In order to solve the above problem, we incorporate the collocation probabilities into the bi-directional word alignment process. Given the alignment sets $A_{s2t}$ and $A_{t2s}$, we can obtain the union $A_{s \cup t} = A_{s2t} \cup A_{t2s}$. The source sentence $f_1^m$ can be segmented into $\tilde{m}$ cepts $\tilde{f}_1^{\tilde{m}}$. The target sentence $e_1^l$ can also be segmented into $\tilde{l}$ cepts $\tilde{e}_1^{\tilde{l}}$. The words in the same cept can be a consecutive word sequence or an interrupted word sequence. Finally, the optimal alignments A between $\tilde{f}_1^{\tilde{m}}$ and $\tilde{e}_1^{\tilde{l}}$ can be obtained from $A_{s \cup t}$ using the following decision rule.

$(A^{*}, \tilde{e}_1^{\tilde{l}}, \tilde{f}_1^{\tilde{m}}) = \arg\max_{A \subseteq A_{s \cup t}} \prod_{(\tilde{e}_i, \tilde{f}_j) \in A} \{ p(\tilde{e}_i, \tilde{f}_j)^{\lambda_1} \, r(\tilde{e}_i)^{\lambda_2} \, r(\tilde{f}_j)^{\lambda_3} \}$  (11)

Here, $r(\tilde{f}_j)$ and $r(\tilde{e}_i)$ denote the collocation probabilities of the words in the source language and target language respectively, which are calculated by using Eq. (7). $p(\tilde{e}_i, \tilde{f}_j)$ denotes the word translation probability that is calculated according to Eq. (12). $\lambda_i$ denotes the weights of these probabilities.

$p(\tilde{e}_i, \tilde{f}_j) = \frac{\sum_{e \in \tilde{e}_i} \sum_{f \in \tilde{f}_j} (p(e \mid f) + p(f \mid e)) / 2}{|\tilde{e}_i| \ast |\tilde{f}_j|}$  (12)

$p(e \mid f)$ and $p(f \mid e)$ are the source-to-target and target-to-source translation probabilities trained from the word aligned bilingual corpus.

Corpora | Chinese words | English words
Bilingual corpus | 6.3M | 8.5M
Additional monolingual corpora | 312M | 203M
Table 1. Statistics of training data

4 Improving Phrase Table

A phrase-based SMT system automatically extracts bilingual phrase pairs from the word aligned bilingual corpus. In such a system, an idiomatic expression may be split into several fragments, and the phrases may include irrelevant words. In this paper, we use the collocation probability to measure the possibility of words composing a phrase. For each bilingual phrase pair automatically extracted from the word aligned corpus, we calculate the collocation probabilities of the source phrase and target phrase respectively, according to Eq. (13).

$r(w_1^n) = \frac{2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} r(w_i, w_j)}{n \ast (n - 1)}$  (13)

Here, $w_1^n$ denotes a phrase with n words; $r(w_i, w_j)$ denotes the collocation probability of a word pair calculated according to Eq. (4).
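To illustrate Eq. (13), the following sketch (our own, not taken from the paper) computes the collocation probability of an extracted phrase, under the assumption that the word collocation probabilities of Eq. (4) are available in a dictionary r and that pairs unseen in the monolingual data score 0.

```python
def phrase_collocation_probability(words, r):
    """Collocation probability of a phrase w_1..w_n (Eq. 13).

    words: list of tokens in the phrase.
    r: dict mapping (wi, wj) to the word collocation probability of Eq. (4).
    """
    n = len(words)
    if n < 2:
        raise ValueError("Eq. (13) is defined for phrases with n >= 2 words")
    total = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):
            # r is symmetric (Eq. 4), so look the pair up in both orders.
            total += r.get((words[i], words[j]),
                           r.get((words[j], words[i]), 0.0))
    return 2.0 * total / (n * (n - 1))
```

Phrases consisting of a single word are handled separately, as described next.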
For the phrase only including one word, we set a fixed collocation probability that is the average of the collocation probabilities of the sentences on a development set. These collocation probabilities are incorporated into the phrase-based SMT system as features. 5 Experiments on Word Alignment 5.1 Experimental settings We use a bilingual corpus, FBIS (LDC2003E14), to train the IBM models. To train the collocation models, besides the monolingual parts of FBIS, we also employ some other larger Chinese and English monolingual corpora, namely, Chinese Gigaword (LDC2007T38), English Gigaword (LDC2007T07), UN corpus (LDC2004E12), Sinorama corpus (LDC2005T10), as shown in Table 1. Using these corpora, we got three kinds of collocation models: CM-1: the training data is the additional monolingual corpora; CM-2: the training data is either side of the bilingual corpus; CM-3: the interpolation of CM-1 and CM-2. To investigate the quality of the generated word alignments, we randomly selected a subset from the bilingual corpus as test set, including 500 sentence pairs. Then word alignments in the subset were manually labeled, referring to the guideline of the Chinese-to-English alignment (LDC2006E93), but we made some modifications for the guideline. For example, if a preposition appears after a verb as a phrase aligned to one single word in the corresponding sentence, then they are glued together. There are several different evaluation metrics for word alignment (Ahrenberg et al., 2000). We use precision (P), recall (R) and alignment error ratio (AER), which are similar to those in Och and Ney (2000), except that we consider each alignment as a sure link. 828 Experiments Single word alignments Multi-word alignments P R AER P R AER Baseline 0.77 0.45 0.43 0.23 0.71 0.65 Improved BWA methods CM-1 0.70 0.50 0.42 0.35 0.86 0.50 CM-2 0.73 0.48 0.42 0.36 0.89 0.49 CM-3 0.73 0.48 0.41 0.39 0.78 0.47 Table 2. English-to-Chinese word alignment results Figure 2. Example of the English-to-Chinese word alignments generated by the BWA method and the improved BWA method using CM-3. " " denotes the alignments of our method; " " denotes the alignments of the baseline method. | | | | g r g S S S P   (14) | | | | r r g S S S R   (15) | | | | | |* 2 1 r g r g S S S S AER     (16) Where, g S and r S denote the automatically generated alignments and the reference alignments. In order to tune the interpolation coefficients in Eq. (5) and the weights of the probabilities in Eq. (11), we also manually labeled a development set including 100 sentence pairs, in the same manner as the test set. By minimizing the AER on the development set, the interpolation coefficients of the collocation probabilities on CM-1 and CM-2 were set to 0.1 and 0.9. And the weights of probabilities were set as 6.0 1   , 2.0 2   and 2.0 3   . 5.2 Evaluation results One-directional alignment results To train a Chinese-to-English SMT system, we need to perform both Chinese-to-English and English-to-Chinese word alignment. We only evaluate the English-to-Chinese word alignment here. GIZA++ with the default settings is used as the baseline method. The evaluation results in Table 2 indicate that the performances of our methods on single word alignments are close to that of the baseline method. For multi-word alignments, our methods significantly outperform the baseline method in terms of both precision and recall, achieving up to 18% absolute error rate reduction. 
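For reference, here is a minimal sketch (ours, not the authors' evaluation script) of the precision, recall and alignment error rate computations of Eqs. (14)-(16); alignments are represented as sets of links, and every reference link is treated as a sure link, as stated above.

```python
def alignment_scores(generated, reference):
    """Precision, recall and AER (Eqs. 14-16).

    generated, reference: sets of alignment links, e.g. {(src_pos, tgt_pos), ...}.
    """
    s_g, s_r = set(generated), set(reference)
    overlap = len(s_g & s_r)
    precision = overlap / len(s_g) if s_g else 0.0       # Eq. (14)
    recall = overlap / len(s_r) if s_r else 0.0          # Eq. (15)
    denom = len(s_g) + len(s_r)
    aer = 1.0 - 2.0 * overlap / denom if denom else 0.0  # Eq. (16)
    return precision, recall, aer
```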
Although the size of the bilingual corpus is much smaller than that of additional monolingual corpora, our methods using CM-1 and CM-2 achieve comparable performances. It is because CM-2 and the BWA model are derived from the same resource. By interpolating CM1 and CM2, i.e. CM-3, the error rate of multi-word alignment results is further reduced. Figure 2 shows an example of word alignment results generated by the baseline method and the improved method using CM-3. In this example, our method successfully identifies many-to-one alignments such as "the people of the world 世人". In our collocation model, the collocation probability of "the people of the world" is much higher than that of "people world". And our method is also effective to prevent the unrelated 中国 的 科学技术 研究 取得 了 许多 令 世人 瞩目 的 成就 。 China's science and technology research has made achievements which have gained the attention of the people of the world . 中国 的 科学技术 研究 取得 了 许多 令 世人 瞩目 的 成就 。 zhong-guo de ke-xue-ji-shu yan-jiu qu-de le xu-duo ling shi-ren zhu-mu de cheng-jiu . china DE science and research obtain LE many let common attract DE achievement . technology people attention 829 Experiments Single word alignments Multi-word alignments All alignments P R AER P R AER P R AER Baseline 0.84 0.43 0.42 0.18 0.74 0.70 0.52 0.45 0.51 Our methods WA-1 0.80 0.51 0.37 0.30 0.89 0.55 0.58 0.51 0.45 WA-2 0.81 0.50 0.37 0.33 0.81 0.52 0.62 0.50 0.44 WA-3 0.78 0.56 0.34 0.44 0.88 0.41 0.63 0.54 0.40 Table 3. Bi-directional word alignment results words from being aligned. For example, in the baseline alignment "has made ... have 取得", "have" and "has" are unrelated to the target word, while our method only generated "made 取 得", this is because that the collocation probabilities of "has/have" and "made" are much lower than that of the whole source sentence. Bi-directional alignment results We build a bi-directional alignment baseline in two steps: (1) GIZA++ is used to obtain the source-to-target and target-to-source alignments; (2) the bi-directional alignments are generated by using "grow-diag-final". We use the methods proposed in section 3 to replace the corresponding steps in the baseline method. We evaluate three methods: WA-1: one-directional alignment method proposed in section 3.1 and grow-diag-final; WA-2: GIZA++ and the bi-directional bilingual word alignments method proposed in section 3.2; WA-3: both methods proposed in section 3. Here, CM-3 is used in our methods. The results are shown in Table 3. We can see that WA-1 achieves lower alignment error rate as compared to the baseline method, since the performance of the improved onedirectional alignment method is better than that of GIZA++. This result indicates that improving one-directional word alignment results in bidirectional word alignment improvement. The results also show that the AER of WA-2 is lower than that of the baseline. This is because the proposed bi-directional alignment method can effectively recognize the correct alignments from the alignment union, by leveraging collocation probabilities of the words in the same cept. Our method using both methods proposed in section 3 produces the best alignment performance, achieving 11% absolute error rate reduction. Experiments BLEU (%) Baseline 29.62 Our methods WA-1 CM-1 30.85 CM-2 31.28 CM-3 31.48 WA-2 CM-1 31.00 CM-2 31.33 CM-3 31.51 WA-3 CM-1 31.43 CM-2 31.62 CM-3 31.78 Table 4. 
Performances of Moses using the different bi-directional word alignments (Significantly better than baseline with p < 0.01) 6 Experiments on Phrase-Based SMT 6.1 Experimental settings We use FBIS corpus to train the Chinese-toEnglish SMT systems. Moses (Koehn et al., 2007) is used as the baseline phrase-based SMT system. We use SRI language modeling toolkit (Stolcke, 2002) to train a 5-gram language model on the English sentences of FBIS corpus. We used the NIST MT-2002 set as the development set and the NIST MT-2004 test set as the test set. And Koehn's implementation of minimum error rate training (Och, 2003) is used to tune the feature weights on the development set. We use BLEU (Papineni et al., 2002) as evaluation metrics. We also calculate the statistical significance differences between our methods and the baseline method by using paired bootstrap re-sample method (Koehn, 2004). 6.2 Effect of improved word alignment on phrase-based SMT We investigate the effectiveness of the improved word alignments on the phrase-based SMT system. The bi-directional alignments are obtained 830 Figure 3. Example of the translations generated by the baseline system and the system where the phrase collocation probabilities are added Experiments BLEU (%) Moses 29.62 + Phrase collocation probability 30.47 + Improved word alignments + Phrase collocation probability 32.02 Table 5. Performances of Moses employing our proposed methods (Significantly better than baseline with p < 0.01) using the same methods as those shown in Table 3. Here, we investigate three different collocation models for translation quality improvement. The results are shown in Table 4. From the results of Table 4, it can be seen that the systems using the improved bi-directional alignments achieve higher quality of translation than the baseline system. If the same alignment method is used, the systems using CM-3 got the highest BLEU scores. And if the same collocation model is used, the systems using WA-3 achieved the higher scores. These results are consistent with the evaluations of word alignments as shown in Tables 2 and 3. 6.3 Effect of phrase collocation probabilities To investigate the effectiveness of the method proposed in section 4, we only use the collocation model CM-3 as described in section 5.1. The results are shown in Table 5. When the phrase collocation probabilities are incorporated into the SMT system, the translation quality is improved, achieving an absolute improvement of 0.85 BLEU score. This result indicates that the collocation probabilities of phrases are useful in determining the boundary of phrase and predicting whether phrases should be translated together, which helps to improve the phrase-based SMT performance. Figure 3 shows an example: T1 is generated by the system where the phrase collocation probabilities are used and T2 is generated by the baseline system. In this example, since the collocation probability of "出 问题" is much higher than that of "问题 。", our method tends to split "出 问题 。" into "(出 问题) (。)", rather than "(出) (问题 。)". For the phrase "才能 避免" in the source sentence, the collocation probability of the translation "in order to avoid" is higher than that of the translation "can we avoid". Thus, our method selects the former as the translation. 
Although the phrase "我们 必须 采取 有效 措 施" in the source sentence has the same translation "We must adopt effective measures", our method splits this phrase into two parts "我们 必 须" and "采取 有效 措施", because two parts have higher collocation probabilities than the whole phrase. We also investigate the performance of the system employing both the word alignment improvement and phrase table improvement methods. From the results in Table 5, it can be seen that the quality of translation is future improved. As compared with the baseline system, an absolute improvement of 2.40 BLEU score is achieved. And this result is also better than the results shown in Table 4. 7 Experiments on Parsing-Based SMT We also investigate the effectiveness of the improved word alignments on the parsing-based SMT system, Joshua (Li et al., 2009). In this system, the Hiero-style SCFG model is used (Chiang, 2007), without syntactic information. The rules are extracted only based on the FBIS corpus, where words are aligned by "MW-3 & CM-3". And the language model is the same as that in Moses. The feature weights are tuned on the development set using the minimum error 我们 必须 采取 有效 措施 才能 避免 出 问题 。 wo-men bi-xu cai-qu you-xiao cuo-shi cai-neng bi-mian chu wen-ti . we must use effective measure can avoid out problem . We must adopt effective measures in order to avoid problems . We must adopt effective measures can we avoid out of the question . T1: T2: 831 Experiments BLEU (%) Joshua 30.05 + Improved word alignments 31.81 Table 6. Performances of Joshua using the different word alignments (Significantly better than baseline with p < 0.01) rate training method. We use the same evaluation measure as described in section 6.1. The translation results on Joshua are shown in Table 6. The system using the improved word alignments achieves an absolute improvement of 1.76 BLEU score, which indicates that the improvements of word alignments are also effective to improve the performance of the parsing-based SMT systems. 8 Conclusion We presented a novel method to use monolingual collocations to improve SMT. We first used the MWA method to identify potentially collocated words and estimate collocation probabilities only from monolingual corpora, no additional resource or linguistic preprocessing is needed. Then the collocation information was employed to improve BWA for various kinds of SMT systems and to improve phrase table for phrasebased SMT. To improve BWA, we re-estimate the alignment probabilities by using the collocation probabilities of words in the same cept. To improve phrase table, we calculate phrase collocation probabilities based on word collocation probabilities. Then the phrase collocation probabilities are used as additional features in phrase-based SMT systems. The evaluation results showed that the proposed method significantly improved word alignment, achieving an absolute error rate reduction of 29% on multi-word alignment. The improved word alignment results in an improvement of 2.16 BLEU score on a phrase-based SMT system and an improvement of 1.76 BLEU score on a parsing-based SMT system. When we also used phrase collocation probabilities as additional features, the phrase-based SMT performance is finally improved by 2.40 BLEU score as compared with the baseline system. Reference Lars Ahrenberg, Magnus Merkel, Anna Sagvall Hein, and Jorg Tiedemann. 2000. Evaluation of Word Alignment Systems. In Proceedings of the Second International Conference on Language Resources and Evaluation, pp. 1255-1261. 
Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, Franz-Josef Och, David Purdy, Noah A. Smith, and David Yarowsky. 1999. Statistical Machine Translation. Final Report. In Johns Hopkins University Workshop. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert. L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter estimation. Computational Linguistics, 19(2): 263-311. Colin Cherry and Dekang Lin. 2003. A Probability Model to Improve Word Alignment. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 88-95. David Chiang. 2007. Hierarchical Phrase-Based Translation. Computational Linguistics, 33(2): 201-228. Fei Huang. 2009. Confidence Measure for Word Alignment. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP, pp. 932940. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pp. 388-395. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In Processings of the International Workshop on Spoken Language Translation 2005. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical Phrase-based Translation. In Proceedings of the Human Language Technology Conference and the North American Association for Computational Linguistics, pp. 127-133. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the ACL, Poster and Demonstration Sessions, pp. 177180. Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren Thornton, Jonathan Weese, and Omar Zaidan. 2009. Demonstration of Joshua: An Open Source Toolkit for Parsing-based Machine Translation. In Proceedings of the 47th Annual Meeting of the As832 sociation for Computational Linguistics, Software Demonstrations, pp. 25-28. Yang Liu, Qun Liu, and Shouxun Lin. Log-linear Models for Word Alignment. 2005. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pp. 459-466. Zhanyi Liu, Haifeng Wang, Hua Wu, and Sheng Li. 2009. Collocation Extraction Using Monolingual Word Alignment Method. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pp. 487-495. Daniel Marcu and William Wong. 2002. A PhraseBased, Joint Probability Model for Statistical Machine Translation. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pp. 133-139. Yuval Marton and Philip Resnik. 2008. Soft Syntactic Constraints for Hierarchical Phrase-Based Translation. In Proceedings of the 46st Annual Meeting of the Association for Computational Linguistics, pp. 1003-1011. Kathleen R. McKeown and Dragomir R. Radev. 2000. Collocations. In Robert Dale, Hermann Moisl, and Harold Somers (Ed.), A Handbook of Natural Language Processing, pp. 507-523. Franz Josef Och and Hermann Ney. 2000. Improved Statistical Alignment Models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pp. 440-447. 
Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 160-167. Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1): 19-52. Kishore Papineni, Salim Roukos, Todd Ward, and Weijing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th annual meeting of the Association for Computational Linguistics, pp. 311-318. Andreas Stolcke. 2002. SRILM - An Extensible Language Modeling Toolkit. In Proceedings for the International Conference on Spoken Language Processing, pp. 901-904. Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3): 377-403. Deyi Xiong, Min Zhang, Aiti Aw, and Haizhou Li. 2009. A Syntax-Driven Bracketing Model for Phrase-Based Translation. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP, pp. 315-323. 833
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 834–843, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Bilingual Sense Similarity for Statistical Machine Translation Boxing Chen, George Foster and Roland Kuhn National Research Council Canada 283 Alexandre-Taché Boulevard, Gatineau (Québec), Canada J8X 3X7 {Boxing.Chen, George.Foster, Roland.Kuhn}@nrc.ca Abstract This paper proposes new algorithms to compute the sense similarity between two units (words, phrases, rules, etc.) from parallel corpora. The sense similarity scores are computed by using the vector space model. We then apply the algorithms to statistical machine translation by computing the sense similarity between the source and target side of translation rule pairs. Similarity scores are used as additional features of the translation model to improve translation performance. Significant improvements are obtained over a state-of-the-art hierarchical phrase-based machine translation system. 1 Introduction The sense of a term can generally be inferred from its context. The underlying idea is that a term is characterized by the contexts it co-occurs with. This is also well known as the Distributional Hypothesis (Harris, 1954): terms occurring in similar contexts tend to have similar meanings. There has been a lot of work to compute the sense similarity between terms based on their distribution in a corpus, such as (Hindle, 1990; Lund and Burgess, 1996; Landauer and Dumais, 1997; Lin, 1998; Turney, 2001; Pantel and Lin, 2002; Pado and Lapata, 2007). In the work just cited, a common procedure is followed. Given two terms to be compared, one first extracts various features for each term from their contexts in a corpus and forms a vector space model (VSM); then, one computes their similarity by using similarity functions. The features include words within a surface window of a fixed size (Lund and Burgess, 1996), grammatical dependencies (Lin, 1998; Pantel and Lin 2002; Pado and Lapata, 2007), etc. The similarity function which has been most widely used is cosine distance (Salton and McGill, 1983); other similarity functions include Euclidean distance, City Block distance (Bullinaria and Levy; 2007), and Dice and Jaccard coefficients (Frakes and Baeza-Yates, 1992), etc. Measures of monolingual sense similarity have been widely used in many applications, such as synonym recognizing (Landauer and Dumais, 1997), word clustering (Pantel and Lin 2002), word sense disambiguation (Yuret and Yatbaz 2009), etc. Use of the vector space model to compute sense similarity has also been adapted to the multilingual condition, based on the assumption that two terms with similar meanings often occur in comparable contexts across languages. Fung (1998) and Rapp (1999) adopted VSM for the application of extracting translation pairs from comparable or even unrelated corpora. The vectors in different languages are first mapped to a common space using an initial bilingual dictionary, and then compared. However, there is no previous work that uses the VSM to compute sense similarity for terms from parallel corpora. The sense similarities, i.e. the translation probabilities in a translation model, for units from parallel corpora are mainly based on the co-occurrence counts of the two units. Therefore, questions emerge: how good is the sense similarity computed via VSM for two units from parallel corpora? 
Is it useful for multilingual applications, such as statistical machine translation (SMT)? In this paper, we try to answer these questions, focusing on sense similarity applied to the SMT task. For this task, translation rules are heuristically extracted from automatically word-aligned sentence pairs. Due to noise in the training corpus or wrong word alignment, the source and target sides of some rules are not semantically equivalent, as can be seen from the following real examples which are taken from the rule table built on our training data (Section 5.1):

世界 上 X 之一 ||| one of X (*)
世界 上 X 之一 ||| one of X in the world
许多 市民 ||| many citizens
许多 市民 ||| many hong kong residents (*)

The source and target sides of the rules with (*) at the end are not semantically equivalent; it seems likely that measuring the semantic similarity from their context between the source and target sides of rules might be helpful to machine translation.

In this work, we first propose new algorithms to compute the sense similarity between two units (unit here includes word, phrase, rule, etc.) in different languages by using their contexts. Second, we use the sense similarities between the source and target sides of a translation rule to improve statistical machine translation performance. This work attempts to measure directly the sense similarity for units from different languages by comparing their contexts1. Our contribution includes proposing new bilingual sense similarity algorithms and applying them to machine translation. We chose a hierarchical phrase-based SMT system as our baseline; thus, the units involved in computation of sense similarities are hierarchical rules.

2 Hierarchical phrase-based MT system

The hierarchical phrase-based translation method (Chiang, 2005; Chiang, 2007) is a formal syntax-based translation modeling method; its translation model is a weighted synchronous context free grammar (SCFG). No explicit linguistic syntactic information appears in the model. An SCFG rule has the following form:

$X \rightarrow \langle \alpha, \gamma, \sim \rangle$

where X is a non-terminal symbol shared by all the rules; each rule has at most two non-terminals. $\alpha$ ($\gamma$) is a source (target) string consisting of terminal and non-terminal symbols. $\sim$ defines a one-to-one correspondence between non-terminals in $\alpha$ and $\gamma$.

1 There has been a lot of work (more details in Section 7) on applying word sense disambiguation (WSD) techniques in SMT for translation selection. However, WSD techniques for SMT do so indirectly, using source-side context to help select a particular translation for a source rule.

           source          target
Ini. phr.  他 出席 了 会议    he attended the meeting
Rule 1     他 出席 了 X1     he attended X1
Context 1  会议             the, meeting
Rule 2     会议             the meeting
Context 2  他, 出席, 了      he, attended
Rule 3     他X1会议          he X1 the meeting
Context 3  出席, 了          attended
Rule 4     出席 了           attended
Context 4  他, 会议          he, the, meeting
Figure 1: example of hierarchical rule pairs and their context features.

Rule frequencies are counted during rule extraction over word-aligned sentence pairs, and they are normalized to estimate features on rules. Following (Chiang, 2005; Chiang, 2007), 4 features are computed for each rule:
• $P(\gamma \mid \alpha)$ and $P(\alpha \mid \gamma)$ are direct and inverse rule-based conditional probabilities;
• $P_w(\gamma \mid \alpha)$ and $P_w(\alpha \mid \gamma)$ are direct and inverse lexical weights (Koehn et al., 2003).
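As a small illustration (ours, not from the paper), the two rule-based conditional probabilities above can be estimated by relative frequency over the counts collected during rule extraction; the lexical weights are omitted here, and the data layout and function name are assumptions made for the example.

```python
from collections import defaultdict

def rule_translation_probabilities(extracted_rules):
    """Estimate P(gamma|alpha) and P(alpha|gamma) by relative frequency.

    extracted_rules: iterable of (alpha, gamma) source/target rule sides,
    one entry per rule occurrence extracted from the word-aligned corpus.
    """
    pair_count = defaultdict(int)
    src_count = defaultdict(int)
    tgt_count = defaultdict(int)
    for alpha, gamma in extracted_rules:
        pair_count[(alpha, gamma)] += 1
        src_count[alpha] += 1
        tgt_count[gamma] += 1

    p_t_given_s = {}  # P(gamma | alpha)
    p_s_given_t = {}  # P(alpha | gamma)
    for (alpha, gamma), c in pair_count.items():
        p_t_given_s[(alpha, gamma)] = c / src_count[alpha]
        p_s_given_t[(alpha, gamma)] = c / tgt_count[gamma]
    return p_t_given_s, p_s_given_t
```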
Empirically, this method has yielded better performance on language pairs such as Chinese-English than the phrase-based method because it permits phrases with gaps; it generalizes the normal phrase-based models in a way that allows long-distance reordering (Chiang, 2005; Chiang, 2007). We use the Joshua implementation of the method for decoding (Li et al., 2009).

3 Bag-of-Words Vector Space Model

To compute the sense similarity via VSM, we follow the previous work (Lin, 1998) and represent the source and target side of a rule by feature vectors. In our work, each feature corresponds to a context word which co-occurs with the translation rule.

3.1 Context Features

In the hierarchical phrase-based translation method, the translation rules are extracted by abstracting some words from an initial phrase pair (Chiang, 2005). Consider a rule with non-terminals on the source and target side; for a given instance of the rule (a particular phrase pair in the training corpus), the context will be the words instantiating the non-terminals. In turn, the context for the sub-phrases that instantiate the non-terminals will be the words in the remainder of the phrase pair. For example in Figure 1, if we have an initial phrase pair 他 出席 了 会议 ||| he attended the meeting, and we extract four rules from this initial phrase: 他 出席 了 X1 ||| he attended X1, 会议 ||| the meeting, 他X1会议 ||| he X1 the meeting, and 出席 了 ||| attended. Therefore, the and meeting are context features of target pattern he attended X1; he and attended are the context features of the meeting; attended is the context feature of he X1 the meeting; also he, the and meeting are the context features of attended (in each case, there are also source-side context features).

3.2 Bag-of-Words Model

For each side of a translation rule pair, its context words are all collected from the training data, and two “bags-of-words” which consist of collections of source and target context words co-occurring with the rule’s source and target sides are created.

$B_f = \{f_1, f_2, ..., f_I\}$, $B_e = \{e_1, e_2, ..., e_J\}$  (1)

where $f_i$ $(1 \le i \le I)$ are source context words which co-occur with the source side of rule $\alpha$, and $e_j$ $(1 \le j \le J)$ are target context words which co-occur with the target side of rule $\gamma$. Therefore, we can represent source and target sides of the rule by vectors $\vec{v}_f$ and $\vec{v}_e$ as in Equation (2):

$\vec{v}_f = \{w_{f_1}, w_{f_2}, ..., w_{f_I}\}$, $\vec{v}_e = \{w_{e_1}, w_{e_2}, ..., w_{e_J}\}$  (2)

where $w_{f_i}$ and $w_{e_j}$ are values for each source and target context feature; normally, these values are based on the counts of the words in the corresponding bags.

3.3 Feature Weighting Schemes

We use pointwise mutual information (Church et al., 1990) to compute the feature values. Let c ($c \in B_f$ or $c \in B_e$) be a context word and $F(r, c)$ be the frequency count of a rule r ($\alpha$ or $\gamma$) co-occurring with the context word c. The pointwise mutual information $MI(r, c)$ is defined as:

$w(r, c) = MI(r, c) = \log \frac{F(r, c) / N}{(F(r)/N) \times (F(c)/N)}$  (3)

where N is the total frequency count of all rules and their context words. Since we are using this value as a weight, following (Turney, 2001), we drop log, N and $F(r)$. Thus (3) simplifies to:

$w(r, c) = \frac{F(r, c)}{F(c)}$  (4)

It can be seen as an estimate of $P(r \mid c)$, the empirical probability of observing r given c.
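To make the weighting scheme of Eq. (4) concrete, here is a minimal sketch (ours) that collects the co-occurrence counts F(r, c) and F(c) and turns them into context vectors; the class name, the per-occurrence input format, and the optional top-N pruning (used in the experiments of Section 5.2) are assumptions for the example. The add-k smoothed variant introduced next would replace the final division.

```python
from collections import defaultdict

class ContextVectorBuilder:
    """Bag-of-words context vectors weighted by w(r, c) = F(r, c) / F(c) (Eq. 4)."""

    def __init__(self):
        self.pair_count = defaultdict(lambda: defaultdict(int))  # F(r, c), keyed r -> c
        self.context_count = defaultdict(int)                    # F(c)

    def observe(self, rule_side, context_words):
        # One occurrence of a rule side together with its context words.
        for c in context_words:
            self.pair_count[rule_side][c] += 1
            self.context_count[c] += 1

    def vector(self, rule_side, top_n=None):
        weights = {c: f_rc / self.context_count[c]
                   for c, f_rc in self.pair_count[rule_side].items()}
        if top_n is not None:
            # Keep only the top-N highest-valued context features per rule side.
            weights = dict(sorted(weights.items(), key=lambda kv: -kv[1])[:top_n])
        return weights
```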
A problem with $P(r \mid c)$ is that it is biased towards infrequent words/features. We therefore smooth $w(r, c)$ with add-k smoothing:

$w(r, c) = \frac{F(r, c) + k}{\sum_{i=1}^{R} (F(r_i, c) + k)} = \frac{F(r, c) + k}{F(c) + kR}$  (5)

where k is a tunable global smoothing constant, and R is the number of rules.

4 Similarity Functions

There are many possibilities for calculating similarities between bags-of-words in different languages. We consider IBM model 1 probabilities and cosine distance similarity functions.

4.1 IBM Model 1 Probabilities

For the IBM model 1 similarity function, we take the geometric mean of symmetrized conditional IBM model 1 (Brown et al., 1993) bag probabilities, as in Equation (6).

$sim(\alpha, \gamma) = \sqrt{P(B_f \mid B_e) \cdot P(B_e \mid B_f)}$  (6)

To compute $P(B_f \mid B_e)$, IBM model 1 assumes that all source words are conditionally independent, so that:

$P(B_f \mid B_e) = \prod_{i=1}^{I} p(f_i \mid B_e)$  (7)

To compute $p(f_i \mid B_e)$, we use a “Noisy-OR” combination which has shown better performance than the standard IBM model 1 probability, as described in (Zens and Ney, 2004):

$p(f_i \mid B_e) = 1 - p(\bar{f}_i \mid B_e)$  (8)

$p(f_i \mid B_e) \approx 1 - \prod_{j=1}^{J} (1 - p(f_i \mid e_j))$  (9)

where $p(\bar{f}_i \mid B_e)$ is the probability that $f_i$ is not in the translation of $B_e$, and $p(f_i \mid e_j)$ is the IBM model 1 probability.

4.2 Vector Space Mapping

A common way to calculate semantic similarity is by vector space cosine distance; we will also
Based on Equation (10) and (11), it is easy to derive the similarity as follows: ) ( ) ( ) | Pr( | | | | ) , cos( ) , ( 1 2 1 2 1 1 ∑ ∑ ∑∑ = = = = = ⋅ ⋅ = = I i f a I I f I i J j e j i f a f a f a f i i j i w sqrt w sqrt w e f w v v v v v v sim v v v v v v γ α (12) where I and J are the number of the words in source and target bag-of-words; if w and j e w are values of source and target features; if a w is the transformed weight mapped from all target features to the source dimension at word fi. 4.4 Improved Similarity Function To incorporate more information than the original similarity functions – IBM model 1 probabilities in Equation (6) and naïve cosine distance similarity function in Equation (12) – we refine the similarity function and propose a new algorithm. As shown in Figure 2, suppose that we have a rule pair ) , ( γ α . full f C and full e C are the contexts extracted according to the definition in section 3 from the full training data for α and for γ , respectively. cooc f C and cooc e C are the contexts for α and γ when α and γ co-occur. Obviously, they satisfy the constraints: full f cooc f C C ⊆ and full e cooc e C C ⊆ . Therefore, the original similarity functions are to compare the two context vectors built on full training data directly, as shown in Equation (13). ) , ( ) , ( full e full f C C sim sim = γ α (13) Then, we propose a new similarity function as follows: 3 2 1 ) , ( ) , ( ) , ( ) , ( λ λ λ γ α cooc e full e cooc e cooc f cooc f full f C C sim C C sim C C sim sim ⋅ ⋅ = (14) where the parameters iλ (i=1,2,3) can be tuned via minimal error rate training (MERT) (Och, 2003). Figure 2: contexts for rule α and γ . A unit’s sense is defined by all its contexts in the whole training data; it may have a lot of different senses in the whole training data. However, when it is linked with another unit in the other language, its sense pool is constrained and is just α γ full f C cooc f C full e C cooc e C 837 a subset of the whole sense set. ) , ( cooc f full f C C sim is the metric which evaluates the similarity between the whole sense pool of α and the sense pool when α co-occurs with γ ; ) , ( cooc e full e C C sim is the analogous similarity metric for γ . They range from 0 to 1. These two metrics both evaluate the similarity for two vectors in the same language, so using cosine distance to compute the similarity is straightforward. And we can set a relatively large size for the vector, since it is not necessary to do vector mapping as the vectors are in the same language. ) , ( cooc e cooc f C C sim computes the similarity between the context vectors when α and γ co-occur. We may compute ) , ( cooc e cooc f C C sim by using IBM model 1 probability and cosine distance similarity functions as Equation (6) and (12). Therefore, on top of the degree of bilingual semantic similarity between a source and a target translation unit, we have also incorporated the monolingual semantic similarity between all occurrences of a source or target unit, and that unit’s occurrence as part of the given rule, into the sense similarity measure. 5 Experiments We evaluate the algorithm of bilingual sense similarity via machine translation. The sense similarity scores are used as feature functions in the translation model. 5.1 Data We evaluated with different language pairs: Chinese-to-English, and German-to-English. For Chinese-to-English tasks, we carried out the experiments in two data conditions. 
The first one is the large data condition, based on training data for the NIST 2 2009 evaluation Chinese-toEnglish track. In particular, all the allowed bilingual corpora except the UN corpus and Hong Kong Hansard corpus have been used for estimating the translation model. The second one is the small data condition where only the FBIS3 corpus is used to train the translation model. We trained two language models: the first one is a 4gram LM which is estimated on the target side of the texts used in the large data condition. The second LM is a 5-gram LM trained on the so 2 http://www.nist.gov/speech/tests/mt 3 LDC2003E14 called English Gigaword corpus. Both language models are used for both tasks. We carried out experiments for translating Chinese to English. We use the same development and test sets for the two data conditions. We first created a development set which used mainly data from the NIST 2005 test set, and also some balanced-genre web-text from the NIST training material. Evaluation was performed on the NIST 2006 and 2008 test sets. Table 1 gives figures for training, development and test corpora; |S| is the number of the sentences, and |W| is the number of running words. Four references are provided for all dev and test sets. Chi Eng Parallel Train Large Data |S| 3,322K |W| 64.2M 62.6M Small Data |S| 245K |W| 9.0M 10.5M Dev |S| 1,506 1,506×4 Test NIST06 |S| 1,664 1,664×4 NIST08 |S| 1,357 1,357×4 Gigaword |S| - 11.7M Table 1: Statistics of training, dev, and test sets for Chinese-to-English task. For German-to-English tasks, we used WMT 20064 data sets. The parallel training data contains 21 million target words; both the dev set and test set contain 2000 sentences; one reference is provided for each source input sentence. Only the target-language half of the parallel training data are used to train the language model in this task. 5.2 Results For the baseline, we train the translation model by following (Chiang, 2005; Chiang, 2007) and our decoder is Joshua5, an open-source hierarchical phrase-based machine translation system written in Java. Our evaluation metric is IBM BLEU (Papineni et al., 2002), which performs case-insensitive matching of n-grams up to n = 4. Following (Koehn, 2004), we use the bootstrapresampling test to do significance testing. By observing the results on dev set in the additional experiments, we first set the smoothing constant k in Equation (5) to 0.5. Then, we need to set the sizes of the vectors to balance the computing time and translation accu 4 http://www.statmt.org/wmt06/ 5 http://www.cs.jhu.edu/~ccb/joshua/index.html 838 racy, i.e., we keep only the top N context words with the highest feature value for each side of a rule 6 . In the following, we use “Alg1” to represent the original similarity functions which compare the two context vectors built on full training data, as in Equation (13); while we use “Alg2” to represent the improved similarity as in Equation (14). “IBM” represents IBM model 1 probabilities, and “COS” represents cosine distance similarity function. After carrying out a series of additional experiments on the small data condition and observing the results on the dev set, we set the size of the vector to 500 for Alg1; while for Alg2, we set the sizes of full f C and full e C N1 to 1000, and the sizes of cooc f C and cooc e C N2 to 100. 
The sizes of the vectors in Alg2 are set in the following process: first, we set N2 to 500 and let N1 range from 500 to 3,000, we observed that the dev set got best performance when N1 was 1000; then we set N1 to 1000 and let N1 range from 50 to 1000, we got best performance when N1 =100. We use this setting as the default setting in all remaining experiments. Algorithm NIST’06 NIST’08 Baseline 27.4 21.2 Alg1 IBM 27.8* 21.5 Alg1 COS 27.8* 21.5 Alg2 IBM 27.9* 21.6* Alg2 COS 28.1** 21.7* Table 2: Results (BLEU%) of small data Chinese-toEnglish NIST task. Alg1 represents the original similarity functions as in Equation (13); while Alg2 represents the improved similarity as in Equation (14). IBM represents IBM model 1 probability, and COS represents cosine distance similarity function. * or ** means result is significantly better than the baseline (p < 0.05 or p < 0.01, respectively). Ch-En De-En Algorithm NIST’06 NIST’08 Test’06 Baseline 31.0 23.8 26.9 Alg2 IBM 31.5* 24.5** 27.2* Alg2 COS 31.6** 24.5** 27.3* Table 3: Results (BLEU%) of large data Chinese-toEnglish NIST task and German-to-English WMT task. 6 We have also conducted additional experiments by removing the stop words from the context vectors; however, we did not observe any consistent improvement. So we filter the context vectors by only considering the feature values. Table 2 compares the performance of Alg1 and Alg2 on the Chinese-to-English small data condition. Both Alg1 and Alg2 improved the performance over the baseline, and Alg2 obtained slight and consistent improvements over Alg1. The improved similarity function Alg2 makes it possible to incorporate monolingual semantic similarity on top of the bilingual semantic similarity, thus it may improve the accuracy of the similarity estimate. Alg2 significantly improved the performance over the baseline. The Alg2 cosine similarity function got 0.7 BLEUscore (p<0.01) improvement over the baseline for NIST 2006 test set, and a 0.5 BLEU-score (p<0.05) for NIST 2008 test set. Table 3 reports the performance of Alg2 on Chinese-to-English NIST large data condition and German-to-English WMT task. We can see that IBM model 1 and cosine distance similarity function both obtained significant improvement on all test sets of the two tasks. The two similarity functions obtained comparable results. 6 Analysis and Discussion 6.1 Effect of Single Features In Alg2, the similarity score consists of three parts as in Equation (14): ) , ( cooc f full f C C sim , ) , ( cooc e full e C C sim , and ) , ( cooc e cooc f C C sim ; where ) , ( cooc e cooc f C C sim could be computed by IBM model 1 probabilities ) , ( cooc e cooc f IBM C C sim or cosine distance similarity function ) , ( cooc e cooc f COS C C sim . Therefore, our first study is to determine which one of the above four features has the most impact on the result. Table 4 shows the results obtained by using each of the 4 features. First, we can see that ) , ( cooc e cooc f IBM C C sim always gives a better improvement than ) , ( cooc e cooc f COS C C sim . This is because ) , ( cooc e cooc f IBM C C sim scores are more diverse than the latter when the number of context features is small (there are many rules that have only a few contexts.) For an extreme example, suppose that there is only one context word in each vector of source and target context features, and the translation probability of the two context words is not 0. 
In this case, ) , ( cooc e cooc f IBM C C sim reflects the translation probability of the context word pair, while ) , ( cooc e cooc f COS C C sim is always 1. Second, ) , ( cooc f full f C C sim and ) , ( cooc e full e C C sim also give some improvements even when used 839 independently. For a possible explanation, consider the following example. The Chinese word “ 红” can translate to “red”, “communist”, or “hong” (the transliteration of 红, when it is used in a person’s name). Since these translations are likely to be associated with very different source contexts, each will have a low ) , ( cooc f full f C C sim score. Another Chinese word 小溪 may translate into synonymous words, such as “brook”, “stream”, and “rivulet”, each of which will have a high ) , ( cooc f full f C C sim score. Clearly, 红 is a more “dangerous” word than 小溪, since choosing the wrong translation for it would be a bad mistake. But if the two words have similar translation distributions, the system cannot distinguish between them. The monolingual similarity scores give it the ability to avoid “dangerous” words, and choose alternatives (such as larger phrase translations) when available. Third, the similarity function of Alg2 consistently achieved further improvement by incorporating the monolingual similarities computed for the source and target side. This confirms the effectiveness of our algorithm. CE_LD CE_SD testset (NIST) ’06 ’08 ’06 ’08 Baseline 31.0 23.8 27.4 21.2 ) , ( cooc f full f C C sim 31.1 24.3 27.5 21.3 ) , ( cooc e full e C C sim 31.1 23.9 27.9 21.5 ) , ( cooc e cooc f IBM C C sim 31.4 24.3 27.9 21.5 ) , ( cooc e cooc f COS C C sim 31.2 23.9 27.7 21.4 Alg2 IBM 31.5 24.5 27.9 21.6 Alg2 COS 31.6 24.5 28.1 21.7 Table 4: Results (BLEU%) of Chinese-to-English large data (CE_LD) and small data (CE_SD) NIST task by applying one feature. 6.2 Effect of Combining the Two Similarities We then combine the two similarity scores by using both of them as features to see if we could obtain further improvement. In practice, we use the four features in Table 4 together. Table 5 reports the results on the small data condition. We observed further improvement on dev set, but failed to get the same improvements on test sets or even lost performance. Since the IBM+COS configuration has one extra feature, it is possible that it overfits the dev set. Algorithm Dev NIST’06 NIST’08 Baseline 20.2 27.4 21.2 Alg2 IBM 20.5 27.9 21.6 Alg2 COS 20.6 28.1 21.7 Alg2 IBM+COS 20.8 27.9 21.5 Table 5: Results (BLEU%) for combination of two similarity scores. Further improvement was only obtained on dev set but not on test sets. 6.3 Comparison with Simple Contextual Features Now, we try to answer the question: can the similarity features computed by the function in Equation (14) be replaced with some other simple features? We did additional experiments on small data Chinese-to-English task to test the following features: (15) and (16) represent the sum of the counts of the context words in Cfull, while (17) represents the proportion of words in the context of α that appeared in the context of the rule ( γ α, ); similarly, (18) is related to the properties of the words in the context of γ . ∑∈ = full f i C f i f f F N ) , ( ) ( α α (15) ∑ ∈ = full e j C e j e e F N ) , ( ) ( γ γ (16) ) ( ) , ( ) , ( α α γ α f C f i f N f F E cooc f i ∑ ∈ = (17) ) ( ) , ( ) , ( γ γ γ α e C e j e N e F E cooc e j ∑ ∈ = (18) where ) , ( if F α and ) , ( je F γ are the frequency counts of rule α or γ co-occurring with the context word if or je respectively. 
Feature Dev NIST’06 NIST’08 Baseline 20.2 27.4 21.2 +Nf 20.5 27.6 21.4 +Ne 20.5 27.5 21.3 +Ef 20.4 27.5 21.2 +Ee 20.4 27.3 21.2 +Nf+Ne 20.5 27.5 21.3 Table 6: Results (BLEU%) of using simple features based on context on small data NIST task. Some improvements are obtained on dev set, but there was no significant effect on the test sets. Table 6 shows results obtained by adding the above features to the system for the small data 840 condition. Although all these features have obtained some improvements on dev set, there was no significant effect on the test sets. This means simple features based on context, such as the sum of the counts of the context features, are not as helpful as the sense similarity computed by Equation (14). 6.4 Null Context Feature There are two cases where no context word can be extracted according to the definition of context in Section 3.1. The first case is when a rule pair is always a full sentence-pair in the training data. The second case is when for some rule pairs, either their source or target contexts are out of the span limit of the initial phrase, so that we cannot extract contexts for those rule-pairs. For Chinese-to-English NIST task, there are about 1% of the rules that do not have contexts; for German-to-English task, this number is about 0.4%. We assign a uniform number as their bilingual sense similarity score, and this number is tuned through MERT. We call it the null context feature. It is included in all the results reported from Table 2 to Table 6. In Table 7, we show the weight of the null context feature tuned by running MERT in the experiments reported in Section 5.2. We can learn that penalties always discourage using those rules which have no context to be extracted. Alg. Task CE_SD CE_LD DE Alg2 IBM -0.09 -0.37 -0.15 Alg2 COS -0.59 -0.42 -0.36 Table 7: Weight learned for employing the null context feature. CE_SD, CE_LD and DE are Chinese-toEnglish small data task, large data task and Germanto-English task respectively. 6.5 Discussion Our aim in this paper is to characterize the semantic similarity of bilingual hierarchical rules. We can make several observations concerning our features: 1) Rules that are largely syntactic in nature, such as 的 X ||| the X of, will have very diffuse “meanings” and therefore lower similarity scores. It could be that the gains we obtained come simply from biasing the system against such rules. However, the results in table 6 show that this is unlikely to be the case: features that just count context words help very little. 2) In addition to bilingual similarity, Alg2 relies on the degree of monolingual similarity between the sense of a source or target unit within a rule, and the sense of the unit in general. This has a bias in favor of less ambiguous rules, i.e. rules involving only units with closely related meanings. Although this bias is helpful on its own, possibly due to the mechanism we outline in section 6.1, it appears to have a synergistic effect when used along with the bilingual similarity feature. 3) Finally, we note that many of the features we use for capturing similarity, such as the context “the, of” for instantiations of X in the unit the X of, are arguably more syntactic than semantic. Thus, like other “semantic” approaches, ours can be seen as blending syntactic and semantic information. 7 Related Work There has been extensive work on incorporating semantics into SMT. 
Key papers by Carpuat and Wu (2007) and Chan et al (2007) showed that word-sense disambiguation (WSD) techniques relying on source-language context can be effective in selecting translations in phrase-based and hierarchical SMT. More recent work has aimed at incorporating richer disambiguating features into the SMT log-linear model (Gimpel and Smith, 2008; Chiang et al, 2009); predicting coherent sets of target words rather than individual phrase translations (Bangalore et al, 2009; Mauser et al, 2009); and selecting applicable rules in hierarchical (He et al, 2008) and syntactic (Liu et al, 2008) translation, relying on source as well as target context. Work by Wu and Fung (2009) breaks new ground in attempting to match semantic roles derived from a semantic parser across source and target languages. Our work is different from all the above approaches in that we attempt to discriminate among hierarchical rules based on: 1) the degree of bilingual semantic similarity between source and target translation units; and 2) the monolingual semantic similarity between occurrences of source or target units as part of the given rule, and in general. In another words, WSD explicitly tries to choose a translation given the current source context, while our work rates rule pairs independent of the current context. 8 Conclusions and Future Work In this paper, we have proposed an approach that uses the vector space model to compute the sense 841 similarity for terms from parallel corpora and applied it to statistical machine translation. We saw that the bilingual sense similarity computed by our algorithm led to significant improvements. Therefore, we can answer the questions proposed in Section 1. We have shown that the sense similarity computed between units from parallel corpora by means of our algorithm is helpful for at least one multilingual application: statistical machine translation. Finally, although we described and evaluated bilingual sense similarity algorithms applied to a hierarchical phrase-based system, this method is also suitable for syntax-based MT systems and phrase-based MT systems. The only difference is the definition of the context. For a syntax-based system, the context of a rule could be defined similarly to the way it was defined in the work described above. For a phrase-based system, the context of a phrase could be defined as its surrounding words in a given size window. In our future work, we may try this algorithm on syntax-based MT systems and phrase-based MT systems with different context features. It would also be possible to use this technique during training of an SMT system – for instance, to improve the bilingual word alignment or reduce the training data noise. References S. Bangalore, S. Kanthak, and P. Haffner. 2009. Statistical Machine Translation through Global Lexical Selection and Sentence Reconstruction. In: Goutte et al (ed.), Learning Machine Translation. MIT Press. P. F. Brown, V. J. Della Pietra, S. A. Della Pietra & R. L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2) 263-312. J. Bullinaria and J. Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior Research Methods, 39 (3), 510–526. M. Carpuat and D. Wu. 2007. Improving Statistical Machine Translation using Word Sense Disambiguation. In: Proceedings of EMNLP, Prague. M. Carpuat. 2009. One Translation per Discourse. 
In: Proceedings of NAACL HLT Workshop on Semantic Evaluations, Boulder, CO. Y. Chan, H. Ng and D. Chiang. 2007. Word Sense Disambiguation Improves Statistical Machine Translation. In: Proceedings of ACL, Prague. D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In: Proceedings of ACL, pp. 263–270. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics. 33(2):201–228. D. Chiang, W. Wang and K. Knight. 2009. 11,001 new features for statistical machine translation. In: Proc. NAACL HLT, pp. 218–226. K. W. Church and P. Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22–29. W. B. Frakes and R. Baeza-Yates, editors. 1992. Information Retrieval, Data Structure and Algorithms. Prentice Hall. P. Fung. 1998. A statistical view on bilingual lexicon extraction: From parallel corpora to non-parallel corpora. In: Proceedings of AMTA, pp. 1–17. Oct. Langhorne, PA, USA. J. Gimenez and L. Marquez. 2009. Discriminative Phrase Selection for SMT. In: Goutte et al (ed.), Learning Machine Translation. MIT Press. K. Gimpel and N. A. Smith. 2008. Rich Source-Side Context for Statistical Machine Translation. In: Proceedings of WMT, Columbus, OH. Z. Harris. 1954. Distributional structure. Word, 10(23): 146-162. Z. He, Q. Liu, and S. Lin. 2008. Improving Statistical Machine Translation using Lexicalized Rule Selection. In: Proceedings of COLING, Manchester, UK. D. Hindle. 1990. Noun classification from predicateargument structures. In: Proceedings of ACL. pp. 268-275. Pittsburgh, PA. P. Koehn, F. Och, D. Marcu. 2003. Statistical PhraseBased Translation. In: Proceedings of HLTNAACL. pp. 127-133, Edmonton, Canada P. Koehn. 2004. Statistical significance tests for machine translation evaluation. In: Proceedings of EMNLP, pp. 388–395. July, Barcelona, Spain. T. Landauer and S. T. Dumais. 1997. A solution to Plato’s problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review. 104:211240. Z. Li, C. Callison-Burch, C. Dyer, J. Ganitkevitch, S. Khudanpur, L. Schwartz, W. Thornton, J. Weese and O. Zaidan, 2009. Joshua: An Open Source Toolkit for Parsing-based Machine Translation. In: Proceedings of the WMT. March. Athens, Greece. D. Lin. 1998. Automatic retrieval and clustering of similar words. In: Proceedings of COLING/ACL98. pp. 768-774. Montreal, Canada. 842 Q. Liu, Z. He, Y. Liu and S. Lin. 2008. Maximum Entropy based Rule Selection Model for Syntaxbased Statistical Machine Translation. In: Proceedings of EMNLP, Honolulu, Hawaii. K. Lund, and C. Burgess. 1996. Producing highdimensional semantic spaces from lexical cooccurrence. Behavior Research Methods, Instruments, and Computers, 28 (2), 203–208. A. Mauser, S. Hasan and H. Ney. 2009. Extending Statistical Machine Translation with Discriminative and Trigger-Based Lexicon Models. In: Proceedings of EMNLP, Singapore. F. Och. 2003. Minimum error rate training in statistical machine translation. In: Proceedings of ACL. Sapporo, Japan. S. Pado and M. Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33 (2), 161–199. P. Pantel and D. Lin. 2002. Discovering word senses from text. In: Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 613–619. Edmonton, Canada. K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of ACL, pp. 311– 318. 
July. Philadelphia, PA, USA. R. Rapp. 1999. Automatic Identification of Word Translations from Unrelated English and German Corpora. In: Proceedings of ACL, pp. 519–526. June. Maryland. G. Salton and M. J. McGill. 1983. Introduction to Modern Information Retrieval. McGraw-Hill, New York. P. Turney. 2001. Mining the Web for synonyms: PMI-IR versus LSA on TOEFL. In: Proceedings of the Twelfth European Conference on Machine Learning, pp. 491–502, Berlin, Germany. D. Wu and P. Fung. 2009. Semantic Roles for SMT: A Hybrid Two-Pass Model. In: Proceedings of NAACL/HLT, Boulder, CO. D. Yuret and M. A. Yatbaz. 2009. The Noisy Channel Model for Unsupervised Word Sense Disambiguation. In: Computational Linguistics. Vol. 1(1) 1-18. R. Zens and H. Ney. 2004. Improvements in phrasebased statistical machine translation. In: Proceedings of NAACL-HLT. Boston, MA. B. Zhao, S. Vogel, M. Eck, and A. Waibel. 2004. Phrase pair rescoring with term weighting for statistical machine translation. In Proceedings of EMNLP, pp. 206–213. July. Barcelona, Spain. 843
2010
86
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 844–853, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Untangling the Cross-Lingual Link Structure of Wikipedia Gerard de Melo Max Planck Institute for Informatics Saarbr¨ucken, Germany [email protected] Gerhard Weikum Max Planck Institute for Informatics Saarbr¨ucken, Germany [email protected] Abstract Wikipedia articles in different languages are connected by interwiki links that are increasingly being recognized as a valuable source of cross-lingual information. Unfortunately, large numbers of links are imprecise or simply wrong. In this paper, techniques to detect such problems are identified. We formalize their removal as an optimization task based on graph repair operations. We then present an algorithm with provable properties that uses linear programming and a region growing technique to tackle this challenge. This allows us to transform Wikipedia into a much more consistent multilingual register of the world’s entities and concepts. 1 Introduction Motivation. The open community-maintained encyclopedia Wikipedia has not only turned the Internet into a more useful and linguistically diverse source of information, but is also increasingly being used in computational applications as a large-scale source of linguistic and encyclopedic knowledge. To allow cross-lingual navigation, Wikipedia offers cross-lingual interwiki links that for instance connect the Indonesian article about Albert Einstein to the corresponding articles in over 100 other languages. Such links are extraordinarily valuable for cross-lingual applications. In the ideal case, a set of articles connected directly or indirectly via such links would all describe the same entity or concept. Due to conceptual drift, different granularities, as well as mistakes made by editors, we frequently find concepts as different as economics and manager in the same connected component. Filtering out inaccurate links enables us to exploit Wikipedia’s multilinguality in a much safer manner and allows us to create a multilingual register of named entities. Contribution. Our research contributions are: 1) We identify criteria to detect inaccurate connections in Wikipedia’s cross-lingual link structure. 2) We formalize the task of removing such links as an optimization problem. 3) We introduce an algorithm that attempts to repair the cross-lingual graph in a minimally invasive way. This algorithm has an approximation guarantee with respect to optimal solutions. 4) We show how this algorithm can be used to combine all editions of Wikipedia into a single large-scale multilingual register of named entities and concepts. 2 Detecting Inaccurate Links In this paper, we model the union of cross-lingual links provided by all editions of Wikipedia as an undirected graph G = (V, E) with edge weights w(e) for e ∈E. In our experiments, we simply honour each individual link equally by defining w(e) = 2 if there are reciprocal links between the two pages, 1 if there is a single link, and 0 otherwise. However, our framework is flexible enough to deal with more advanced weighting schemes, e.g. one could easily plug in cross-lingual measures of semantic relatedness between article texts. It turns out that an astonishing number of connected components in this graph harbour inaccurate links between articles. 
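As a concrete illustration of the edge-weighting scheme just described (weight 2 for reciprocal links, 1 for a single link, 0 otherwise), the following Python sketch collapses directed interwiki links into a weighted undirected graph. The input format (an iterable of source/target page pairs) is an assumption made for illustration only, not the paper's actual data layout.

```python
# Collapse directed interwiki links into an undirected weighted graph:
# weight 2 if the link is reciprocated, 1 if only one direction exists.
from collections import defaultdict

def build_crosslingual_graph(directed_links):
    """directed_links: iterable of (source_page, target_page) interwiki links."""
    directed = set(directed_links)
    weights = defaultdict(int)
    for u, v in directed:
        edge = tuple(sorted((u, v)))                 # undirected edge key
        weights[edge] = 2 if (v, u) in directed else max(weights[edge], 1)
    return dict(weights)

links = [("en:Television", "de:Fernsehen"),
         ("de:Fernsehen", "en:Television"),          # reciprocal pair -> weight 2
         ("eo:Televidilo", "en:Television")]         # single direction -> weight 1
print(build_crosslingual_graph(links))
```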
For instance, the Esperanto article ‘Germana Imperiestro’ is about German emporers and another Esperanto article ‘Germana Imperiestra Regno’ is about the German Empire, but, as of June 2010, both are linked to the English and German articles about the German Empire. Over time, some inaccurate links may be fixed, but in this and in large numbers of other cases, the imprecise connection has persisted for many years. In order to detect such cases, we need to have some way of specifying that two articles are likely to be distinct. 844 Figure 1: Connected component with inaccurate links (simplified) 2.1 Distinctness Assertions Figure 1 shows a connected component that conflates the concept of television as a medium with the concept of TV sets as devices. Among other things, we would like to state that ‘Television’ and ‘T.V.’ are distinct from ‘Television set’ and ‘TV set’. In general, we may have several sets of entities Di,1, . . . , Di,li, for which we assume that any two entities u,v from different sets are pairwise distinct with some degree of confidence or weight. In our example, Di,1 = {‘Television’,‘T.V.’} would be one set, and Di,2 = {‘Television set’,‘TV set’} would be another set, which means that we are assuming ‘Television’, for example, to be distinct from both ‘Television set’ and ‘TV set’. Definition 1. (Distinctness Assertions) Given a set of nodes V , a distinctness assertion is a collection Di = (Di,1, . . . , Di,li) of pairwise disjoint (i.e. Di,j ∩Di,k = ∅for j ̸= k) subsets Di,j ⊂V that expresses that any two nodes u ∈Di,j, v ∈Di,k from different subsets (j ̸= k) are asserted to be distinct from each other with some weight w(Di) ∈R. We found that many components with inaccurate links can be identified automatically with the following distinctness assertions. Criterion 1. (Distinctness between articles from the same Wikipedia edition) For each languagespecific edition of Wikipedia, a separate assertion (Di,1, Di,2, . . . ) can be made, where each Di,j contains an individual article together with its respective redirection pages. Two articles from the same Wikipedia very likely describe distinct concepts unless they are redirects of each other. For example, ‘Georgia (country)’ is distinct from ‘Georgia (U.S. State)’. Additionally, there are also redirects that are clearly marked by a category or template as involving topic drift, e.g. redirects from songs to albums or artists, from products to companies, etc. We keep such redirects in a Di,j distinct from the one of their redirect targets. Criterion 2. (Distinctness between categories from the same Wikipedia edition) For each language-specific edition of Wikipedia, a separate assertion (Di,1, Di,2, . . . ) is made, where each Di,j contains a category page together with any redirects. For instance, ‘Category:Writers’ is distinct from ‘Category:Writing’. Criterion 3. (Distinctness for links with anchor identifiers) The English ‘Division by zero’, for instance, links to the German ‘Null#Division’. The latter is only a part of a larger article about the number zero in general, so we can make a distinctness assertion to separate ‘Division by zero’ from ‘Null’. In general, for each interwiki link or redirection with an anchor identifier, we add an assertion (Di,1, Di,2) where Di,1,Di,2 represent the respective articles without anchor identifiers. These three types of distinctness assertions are instantiated for all articles and categories of all Wikipedia editions. 
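The following sketch shows one possible way to represent a Criterion 1 assertion in code: within one language edition, every article is grouped with its co-referent redirects, and pages from different groups are asserted distinct with a common weight. The data structures and the helper name criterion1_assertion are my own illustrations; the uniform weight of 100 mirrors the value used in the experiments reported later in the paper.

```python
# Illustrative representation of a Definition 1 distinctness assertion and a
# builder for Criterion 1 (one assertion per language edition).
from dataclasses import dataclass

@dataclass
class DistinctnessAssertion:
    groups: list     # disjoint sets D_{i,1}, ..., D_{i,l_i} of page identifiers
    weight: float    # w(D_i)

def criterion1_assertion(articles_to_redirects, weight=100.0):
    """articles_to_redirects: {article title: set of co-referent redirect titles}."""
    groups = [{article} | set(redirects)
              for article, redirects in articles_to_redirects.items()]
    return DistinctnessAssertion(groups=groups, weight=weight)

en = criterion1_assertion({
    "en:Georgia (country)": {"en:Republic of Georgia"},
    "en:Georgia (U.S. state)": {"en:State of Georgia"},
})
# Any page in the first group is asserted distinct from any page in the second,
# with the uniform weight 100 used in the experiments described below.
```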
The assertion weights are tunable; the simplest choice is using a uniform weight for all assertions (note that these weights are different from the edge weights in the graph). We will revisit this issue in our experiments. 2.2 Enforcing Consistency Given a graph G representing cross-lingual links between Wikipedia pages, as well as distinctness assertions D1, . . . , Dn with weights w(Di), we may find that nodes that are asserted to be distinct are in the same connected component. We can then try to apply repair operations to reconcile the graph’s link structure with the distinctness asssertions and obtain global consistency. There are two ways to modify the input, and for each we can also consider the corresponding weights as a sort of cost that quantifies how much we are changing the original input: a) Edge cutting: We may remove an edge e ∈ E from the graph, paying cost w(e). b) Distinctness assertion relaxation: We may remove a node v ∈V from a distinctness assertion Di, paying cost w(Di). 845 Removing edges allows us to split connected components into multiple smaller components, thereby ensuring that two nodes asserted to be distinct are no longer connected directly or indirectly. In Figure 1, for instance, we could delete the edge from the Spanish ‘TV set’ article to the Japanese ‘television’ article. In constrast, removing nodes from distinctness assertions means that we decide to give up our claim of them being distinct, instead allowing them to share a connected component. Our reliance on costs is based on the assumption that the link structure or topology of the graph provides the best indication of which cross-lingual links to remove. In Figure 1, we have distinctness assertions between nodes in two densely connected clusters that are tied together only by a single spurious link. In such cases, edge removals can easily yield separate connected components. When, however, the two nodes are strongly connected via many different paths with high weights, we may instead opt for removing one of the two nodes from the distinctness assertion. The aim will be to balance the costs for removing edges from the graph with the costs for removing nodes from distinctness assertions to produce a consistent solution with a minimal total repair cost. We accommodate our knowledge about distinctness while staying as close as possible to what Wikipedia provides as input. This can be formalized as the Weighted Distinctness-Based Graph Separation (WDGS) problem. Let G be an undirected graph with a set of vertices V and a set of edges E weighted by w : E →R. If we use a set C ⊆V to specify which edges we want to cut from the original graph, and sets Ui to specify which nodes we want to remove from distinctness assertions, we can begin by defining WDGS solutions as follows. Definition 2. (WDGS Solution). Given a graph G = (V, E) and n distinctness assertions D1, . . . , Dn, a tuple (C, U1, . . . , Un) is a valid WDGS solution if and only if ∀i, j, k ̸= j, u ∈Di,j \ Ui, v ∈Di,k \ Ui: P(u, v, E \ C) = ∅, i.e. the set of paths from u to v in the graph (V, E \C) is empty. Definition 3. (WDGS Cost). Let w : E →R be a weight function for edges e ∈E, and w(Di) (i = 1 . . . n) be weights for the distinctness assertions. The (total) cost of a WDGS solution S = (C, U1, . . . , Un) is then defined as c(S) = c(C, U1, . . . , Un) = "X e∈C w(e) # + " n X i=1 |Ui| w(Di) # Definition 4. (WDGS). A WDGS problem instance P consists of a graph G = (V, E) with edge weights w(e) and n distinctness assertions D1, . . . 
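Definitions 2 and 3 can be read operationally: a candidate solution (C, U_1, ..., U_n) is valid if, once the cut edges C are removed, no two nodes asserted distinct (and not dropped from their assertion) remain connected, and its cost is the total weight of the cut edges plus |U_i| · w(D_i) summed over all assertions. A hedged sketch, with graphs as plain dictionaries and assertions given as (groups, weight) pairs of my own choosing:

```python
# Check a candidate WDGS solution and compute its cost (Definitions 2 and 3).
from collections import deque

def connected(u, v, adjacency):
    """Breadth-first search over the graph that remains after cutting edges."""
    seen, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            return True
        for y in adjacency.get(x, ()):
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return False

def solution_cost(edge_weights, cut_edges, assertions, removed):
    # c(S) = sum of cut-edge weights + sum_i |U_i| * w(D_i)
    return (sum(edge_weights[e] for e in cut_edges)
            + sum(len(u_i) * w_i for u_i, (_, w_i) in zip(removed, assertions)))

def solution_valid(edges, cut_edges, assertions, removed):
    adjacency = {}
    for (a, b) in edges:
        if (a, b) not in cut_edges:
            adjacency.setdefault(a, set()).add(b)
            adjacency.setdefault(b, set()).add(a)
    for (groups, _), u_i in zip(assertions, removed):
        for j, g_j in enumerate(groups):
            for g_k in groups[j + 1:]:
                for a in g_j - u_i:
                    for b in g_k - u_i:
                        if connected(a, b, adjacency):
                            return False
    return True
```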
, Dn with weights w(Di). The objective consists in finding a solution (C, U1, . . . , Un) with minimal cost c(C, U1, . . . , Un). It turns out that finding optimal solutions efficiently is a hard problem (proofs in Appendix A). Theorem 1. WDGS is NP-hard and APX-hard. If the Unique Games Conjecture (Khot, 2002) holds, then it is NP-hard to approximate WDGS within any constant factor α > 0. 3 Approximation Algorithm Due to the hardness of WDGS, we devise a polynomial-time approximation algorithm with an approximation factor of 4 ln(nq + 1) where n is the number of distinctness assertions and q = max i,j |Di,j|. This means that for all problem instances P, we can guarantee c(S(P)) c(S∗(P)) ≤4 ln(nq + 1), where S(P) is the solution determined by our algorithm, and S∗(P) is an optimal solution. Note that this approximation guarantee is independent of how long each Di is, and that it merely represents an upper bound on the worst case scenario. In practice, the results tend to be much closer to the optimum, as will be shown in Section 4. Our algorithm first solves a linear program (LP) relaxation of the original problem, which gives us hints as to which edges should most likely be cut and which nodes should most likely be removed from distinctness assertions. Note that this is a continuous LP, not an integer linear program (ILP); the latter would not be tractable due to the large number of variables and constraints of the problem. After solving the linear program, a new – extended – graph is constructed and the optimal LP solution is used to define a distance metric on it. The final solution is obtained by smartly selecting regions in this extended graph as the individual output components, employing a region 846 growing technique in the spirit of the seminal work by Leighton and Rao (1999). Edges that cross the boundaries of these regions are cut. Definition 5. Given a WDGS instance, we define a linear program of the following form: minimize P e∈E dew(e) + nP i=1 liP j=1 P v∈Di,j ui,vw(Di) subject to pi,j,v = ui,v ∀i, j<li, v ∈Di,j (1) pi,j,v + ui,v ≥1 ∀i, j<li, v ∈S k>j Di,k (2) pi,j,v ≤pi,j,u + de ∀i, j<li, e=(u,v) ∈E (3) de ≥0 ∀e ∈E (4) ui,v ≥0 ∀i, v ∈ liS j=1 Di,j (5) pi,j,v ≥0 ∀i, j<li, v∈V (6) The LP uses decision variables de and ui,v, and auxiliary variables pi,j,v that we refer to as potential variables. The de variables indicate whether (in the continuous LP: to what degree) an edge e should be deleted, and the ui,v variables indicate whether (to what degree) v should be removed from a distinctness assertion Di. The LP objective function corresponds to Definition 3, aiming to minimize the total costs. A potential variable pi,j,v reflects a sort of potential difference between an assertion Di,j and a node v. If pi,j,v = 0, then v is still connected to nodes in Di,j. Constraints (1) and (2) enforce potential differences between Di,j and all nodes in Di,k with k > j. For instance, for distinctness between ‘New York City’ and ‘New York’ (the state), they might require ‘New York’ to have a potential of 1, while ‘New York City’ has a potential of 0. The potential variables are tied to the deletion variables de for edges in Constraint (3) as well as to the ui,v in Constraints (1) and (2). This means that the potential difference pi,j,v + ui,v ≥1 can only be obtained if edges are deleted on every path between ‘New York City’ and ‘New York’, or if at least one of these two nodes is removed from the distinctness assertion (by setting the corresponding ui,v to non-zero values). 
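To make the LP of Definition 5 more tangible, here is a rough sketch of how one might assemble it with the PuLP modelling library for a toy instance. This is not the authors' implementation (they solved their LPs with CPLEX); the variable naming, the (groups, weight) assertion format, and the symmetric handling of constraint (3) for undirected edges are my assumptions.

```python
# Sketch of the continuous LP relaxation in Definition 5 using PuLP.
import pulp

def build_wdgs_lp(edges, edge_w, assertions):
    """edges: list of (u, v); edge_w: {(u, v): weight};
       assertions: list of (groups, weight) with groups a list of node sets."""
    prob = pulp.LpProblem("wdgs_relaxation", pulp.LpMinimize)
    d = {e: pulp.LpVariable(f"d_{e[0]}_{e[1]}", lowBound=0) for e in edges}
    nodes = {n for e in edges for n in e}
    for groups, _ in assertions:
        for g in groups:
            nodes |= set(g)
    u, p = {}, {}
    for i, (groups, w_i) in enumerate(assertions):
        for v in set().union(*groups):
            u[i, v] = pulp.LpVariable(f"u_{i}_{v}", lowBound=0)
        for j in range(len(groups) - 1):
            for v in nodes:
                p[i, j, v] = pulp.LpVariable(f"p_{i}_{j}_{v}", lowBound=0)
    # Objective: fractional edge deletions plus assertion relaxations.
    prob += (pulp.lpSum(d[e] * edge_w[e] for e in edges)
             + pulp.lpSum(u[i, v] * w_i
                          for i, (groups, w_i) in enumerate(assertions)
                          for g in groups for v in g))
    for i, (groups, _) in enumerate(assertions):
        for j in range(len(groups) - 1):
            for v in groups[j]:
                prob += p[i, j, v] == u[i, v]                 # constraint (1)
            for k in range(j + 1, len(groups)):
                for v in groups[k]:
                    prob += p[i, j, v] + u[i, v] >= 1         # constraint (2)
            for (a, b) in edges:                              # constraint (3), both directions
                prob += p[i, j, b] <= p[i, j, a] + d[(a, b)]
                prob += p[i, j, a] <= p[i, j, b] + d[(a, b)]
    return prob, d, u

# prob, d, u = build_wdgs_lp(edges, edge_w, assertions); prob.solve()
# The optimal fractional values of d and u then guide the rounding stage.
```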
Constraints (4), (5), (6) ensure non-negativity. Having solved the linear program, the next major step is to convert the optimal LP solution into the final – discrete – solution. We cannot rely on standard rounding methods to turn the optimal fractional values of the de and ui,v variables into a valid solution. Often, all solution variables have small values and rounding will merely produce an empty (C, U1, . . . , Un) = (∅, ∅, . . . , ∅). Instead, a more sophisticated technique is necessary. The optimal solution of the LP can be used to define an extended graph G′ with a distance metric d between nodes. The algorithm then operates on this graph, in each iteration selecting regions that become output components and removing them from the graph. A simple example is shown in Figure 2. The extended graph contains additional nodes and edges representing distinctness assertions. Cutting one of these additional edges corresponds to removing a node from a distinctness assertion. Definition 6. Given G = (V, E) and distinctness assertions D1, . . . , Dn with weights w(Di), we define an undirected graph G′ = (V ′, E′) where V ′ = V ∪{vi,v | i = 1 . . . n, w(Di) > 0, v ∈S j Di,j}, E′ = {e ∈E | w(e) > 0} ∪ {(v, vi,v) | v ∈Di,j, w(Di) > 0}. We accordingly extend the definition of w(e) to additionally cover the new edges by defining w(e) = w(Di) for e = (v, vi,v). We also extend it for sets S of edges by defining w(S) = P e∈S w(e). Finally, we define a node distance metric d(u, v) =                      0 u = v de (u, v) ∈E ui,v u = vi,v ui,u v = vi,u min p∈ P(u,v,E′) P (u′,v′) ∈p d(u′, v′) otherwise, where P(u, v, E′) denotes the set of acyclic paths between two nodes in E′. We further fix ˆcf = X (u,v)∈E′ d(u, v) w(e) as the weight of the fractional solution of the LP (ˆcf is a constant based on the original E′, irrespective of later modifications to the graph). Definition 7. Around a given node v in G′, we consider regions R(v, r) ⊆V with radius r. The cut C(v, r) of a given region is defined as the set of edges in G′ with one endpoint within the region and one outside the region: R(v, r) = {v′ ∈V ′ | d(v, v′) ≤r} C(v, r) = {e ∈E′ | |e ∩R(v, r)| = 1} For sets of nodes S ⊆V , we define R(S, r) = S v∈S R(v, r) and C(S, r) = S v∈S C(v, r). 847 Figure 2: Extended graph with two added nodes v1,u, v1,v representing distinctness between ‘Televisi´on’ and ‘Televisor’, and a region around v1,u that would cut the link from the Japanese ‘Television’ to ‘Televisor’ Definition 8. Given q = max i,j |Di,j|, we approximate the optimal cost of regions as: ˆc(v, r) = X e=(u,u′)∈E′: e⊆R(v,r) d(u, u′) w(e) (1) + X e∈C(v,r) v′∈e∩R(v,r) (r −d(v, v′)) w(e) ˆc(S, r) = 1 nq ˆcf + X v∈S ˆc(v, r) (2) The first summand accounts for the edges entirely within the region, and the second one accounts for the edges in C(v, r) to the extent that they are within the radius. The definition of ˆc(S, r) contains an additional slack component that is required for the approximation guarantee proof. Based on these definitions, Algorithm 3.1 uses the LP solution to construct the extended graph. It then repeatedly, as long as there is an unsatisfied assertion Di, chooses a set S of nodes containing one node from each relevant Di,j. Around the nodes in S it simultaneously grows |S| regions with the same radius, a technique previously suggested by Avidor and Langberg (2007). These regions are essentially output components that determine the solution. 
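The region and cut of Definition 7 reduce to a shortest-path computation once the LP solution has been turned into edge lengths on the extended graph. A small illustrative sketch (not the authors' code), using Dijkstra's algorithm:

```python
# Grow a region R(v, r) of radius r around a node and collect its cut C(v, r),
# given the LP-derived edge lengths d(u, v) on the extended graph.
import heapq

def dijkstra(source, adjacency):
    """adjacency: {node: {neighbour: edge_length}} built from the LP solution."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d_u, u = heapq.heappop(heap)
        if d_u > dist.get(u, float("inf")):
            continue
        for v, length in adjacency.get(u, {}).items():
            alt = d_u + length
            if alt < dist.get(v, float("inf")):
                dist[v] = alt
                heapq.heappush(heap, (alt, v))
    return dist

def region_and_cut(source, radius, adjacency):
    dist = dijkstra(source, adjacency)
    region = {v for v, d_v in dist.items() if d_v <= radius}
    cut = {tuple(sorted((u, v)))
           for u in region for v in adjacency.get(u, {})
           if v not in region}
    return region, cut
```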
Repeatedly choosing the radius that minimizes w(C(S,r)) ˆc(S,r) allows us to obtain the approximation guarantee, because the distances in this extended graph are based on the solution of the LP. The properties of this algorithm are given by the following two theorems (proofs in Appendix A). Theorem 2. The algorithm yields a valid WDGS solution (C, U1, . . . , Un). Theorem 3. The algorithm yields a solution (C, U1, . . . , Un) with an approximation factor of 4 ln(nq + 1) with respect to the cost of the optimal WDGS solution (C∗, U∗ 1 , . . . , U∗ n), where n is the number of distinctness assertions and q = max i,j |Di,j|. This solution can be obtained in polynomial time. 4 Results 4.1 Wikipedia We downloaded February 2010 XML dumps of all available editions of Wikipedia, in total 272 editions that amount to 86.5 GB uncompressed. From these dumps we produced two datasets. Dataset A captures cross-lingual interwiki links between pages, in total 77.07 million undirected edges (146.76 million original links). Dataset B additionally includes 2.2 million redirect-based edges. Wikipedia deals with interwiki links to redirects transparently, however there are many redirects with titles that do not co-refer, e.g. redirects from members of a band to the band, or from aspects of a topic to the topic in general. We only included redirects in the following cases: • the titles of redirect and redirect target match after Unicode NFKD normalization, diacritics removal, case conversion, and removal of punctuation characters • the redirect uses certain templates or categories that indicate co-reference with the target (alternative names, abbreviations, etc.) We treated them like reciprocal interwiki links by assigning them a weight of 2. 4.2 Application of Algorithm The choice of distinctness assertion weights depends on how lenient we wish to be towards conceptual drift, allowing us to opt for more fine- or more coarse-grained distinctions. In our experiments, we decided to prefer fine-grained conceptual distinctions, and settled on a weight of 100. We analysed over 20 million connected components in each dataset, checking for distinctness assertions. For the roughly 110,000 connected components with relevant distinctness assertions, 848 Algorithm 3.1 WDGS Approximation Algorithm 1: procedure SELECT(V, E, V ′, E′, w, D1, . . . , Dn, l1, . . . , ln) 2: solve linear program given by Definition 5 ▷determine optimal fractional solution 3: construct G′ = (V ′, E′) ▷extended graph (Definition 6) 4: C ←{e ∈E | w(e) = 0} ▷cut zero-weighted edges 5: Ui ← li−1 S j=1 Di,j ∀i : w(Di) = 0 ▷remove zero-weighted Di 6: while ∃i, j, k > j, u ∈Di,j, v ∈Di,k : P(vi,u, vi,v, E′) ̸= ∅do ▷find unsatisfied assertion 7: S ←∅ ▷set of nodes around which regions will be grown 8: for all j in 1 . . . li −1 do ▷arbitrarily choose node from each Di,j 9: if ∃v ∈Di,j : vi,v ∈V ′ then S ←S ∪vi,v 10: D ←{d(u, v) ≤1 2 | u ∈S, v ∈V ′} ∪{ 1 2} ▷set of distances 11: choose ϵ such that ∀d, d′ ∈D : 0 < ϵ ≪|d −d′| ▷infinitesimally small 12: r ← argmin r=d−ϵ: d∈D\{0} w(C(S, r)) ˆc(S, r) ▷choose optimal radius (ties broken arbitrarily) 13: V ′ ←V ′ \ R(S, r) ▷remove regions from G′ 14: E′ ←{e ∈E′ | e ⊆V ′} 15: C ←C ∪(C(S, r) ∩E) ▷update global solution 16: for all i′ in 1 . . . n do 17: Ui′ ←Ui′ ∪{v | (vi′,v, v) ∈C(S, r)} 18: for all j in 1 . . . li′ do Di′,j ←Di′,j ∩V ′ ▷prune distinctness assertions 19: return (C, U1, . . . , Un) we applied our algorithm, relying on the commercial CPLEX tool to solve the linear programs. 
In most cases, the LP solving took less than a second, however the LP sizes grow exponentially with the number of nodes and hence the time complexity increases similarly. In about 300 cases per dataset, CPLEX took too long and was automatically killed or the linear program was a priori deemed too large to complete in a short amount of time. For these cases, we adopted an alternative strategy described later on. Table 1 provides the experimental results for the two datasets. Dataset B is more connected and thus has fewer connected components with more pairs of nodes asserted to be distinct by distinctness assertions. The LP given by Definition 5 provides fractional solutions that constitute lower bounds on the optimal solution (cf. also Lemma 5 in Appendix A), so the optimal solution cannot have a cost lower than the fractional LP solution. Table 1 shows that in practice, our algorithm achieves near-optimal results. 4.3 Linguistic Adequacy The near-optimal results of our algorithm apply with respect to our problem formalization, which aims at repairing the graph in a minimally invaTable 1: Algorithm Results Dataset A Dataset B Connected components 23,356,027 21,161,631 – with distinctness assertions 112,857 113,714 – algorithm applied successfully 112,580 113,387 Distinctness assertions 380,694 379,724 Node pairs considered distinct 916,554 1,047,299 Lower bound on optimal cost 1,255,111 1,245,004 Cost of our solution 1,306,747 1,294,196 Factor 1.04 1.04 Edges to be deleted (undirected) 1,209,798 1,199,181 Nodes to be merged 603 573 sive way. It may happen, however, that the graph’s topology is misleading, and that in a specific case deleting many cross-lingual links to separate two entities is more appropriate than looking for a conservative way to separate them. This led us 849 to study the linguistic adequacy. Two annotators evaluated 200 randomly selected separated pairs from Dataset A consisting of an English and a German article, with an inter-annotator agreement (Cohen κ) of 0.656. Examples are given in Table 2. We obtained a precision of 87.97% ± 0.04% (Wilson score interval) against the consensus annotation. Many of the errors are the result of articles having many inaccurate outgoing links, in which case they may be assigned to the wrong component. In other cases, we noted duplicate articles in Wikipedia. Occasionally, we also observed differences in scope, where one article would actually describe two related concepts in a single page. Our algorithm will then either make a somewhat arbitrary assignment to the component of either the first or second concept, or the broader generalization of the two concepts becomes a separate, more general connected component. 4.4 Large Problem Instances When problem instances become too large, the linear programs can become too unwieldy for linear optimization software to cope with on current hardware. In such cases, the graphs tend to be very sparsely connected, consisting of many smaller, more densely connected subgraphs. We thus investigated graph partitioning heuristics to decompose larger graphs into smaller parts that can more easily be handled with our algorithm. The METIS algorithms (Karypis and Kumar, 1998) can decompose graphs with hundreds of thousands of nodes almost instantly, but favour equally sized clusters over lower cut costs. We obtained partitionings with costs orders of magnitude lower using the heuristic by Dhillon et al. (2007). 
4.5 Database of Named Entities The partitioning heuristics allowed us to process all entries in the complete set of Wikipedia dumps and produce a clean output set of connected components where each Wikipedia article or category belongs to a connected component consisting of pages about the same entity or concept. We can regard these connected components as equivalence classes. This means that we obtain a large-scale multilingual database of named entities and their translations. We are also able to more safely transfer information cross-lingually between editions. For example, when an article a has a category c in the French Wikipedia, we can suggest the corresponding Indonesian category for the corresponding Indonesian article. Moreover, we believe that this database will help extend resources like DBPedia and YAGO that to date have exclusively used the English Wikipedia as their repository of entities and classes. With YAGO’s category heuristics, even entirely non-English connected components can be assigned a class in WordNet as long as at least one of the relevant categories has an English page. So, the French Wikipedia article on the Dutch schooner ‘JR Tolkien’, despite the lack of a corresponding English article, can be assigned to the WordNet synset for ‘ship’. Using YAGO’s plural heuristic to distinguish classes (Einstein is a physicist) from topic descriptors (Einstein belongs to the topic physics), we determined that over 4.8 million connected components can be linked to WordNet, greatly surpassing the 3.2 million articles covered by the English Wikipedia alone. 5 Related Work A number of projects have used Wikipedia as a database of named entities (Ponzetto and Strube, 2007; Silberer et al., 2008). The most wellknown are probably DBpedia (Auer et al., 2007), which serves as a hub in the Linked Data Web, Freebase1, which combines human input and automatic extractors, and YAGO (Suchanek et al., 2007), which adds an ontological structure on top of Wikipedia’s entities. Wikipedia has been used cross-lingually for cross-lingual IR (Nguyen et al., 2009), question answering (Ferr´andez et al., 2007) as well as for learning transliterations (Pasternack and Roth, 2009), among other things. Mihalcea and Csomai (2007) have studied predicting new links within a single edition of Wikipedia. Sorg and Cimiano (2008) considered the problem of suggesting new cross-lingual links, which could be used as additional inputs in our problem. Adar et al. (2009) and Bouma et al. (2009) show how cross-lingual links can be used to propagate information from one Wikipedia’s infoboxes to another edition. Our aggregation consistency algorithm uses theoretical ideas put forward by researchers studying graph cuts (Leighton and Rao, 1999; Garg et al., 1996; Avidor and Langberg, 2007). 
Our problem setting is related to that of correlation clustering (Bansal et al., 2004), where a graph consist1http://www.freebase.com/ 850 Table 2: Examples of separated concepts English concept German concept (translated) Explanation Coffee percolator French Press different types of brewing devices Baqa-Jatt Baqa al-Gharbiyye Baqa-Jatt is a city resulting from a merger of Baqa al-Gharbiyye and Jatt Leucothoe (plant) Leucothea (Orchamos) the second refers to a figure of Greek mythology Old Belarusian language Ruthenian language the second is often considered slightly broader ing of positively and negatively labelled similarity edges is clustered such that similar items are grouped together, however our approach is much more generic than conventional correlation clustering. Charikar et al. (2005) studied a variation of correlation clustering that is similar to WDGS, but since a negative edge would have to be added between each relevant pair of entities in a distinctness assertion, the approximation guarantee would only be O(log(n |V |2)). Minimally invasive repair operations on graphs have also been studied for graph similarity computation (Zeng et al., 2009), where two graphs are provided as input. 6 Conclusions and Future Work We have presented an algorithmic framework for the problem of co-reference that produces consistent partitions by intelligently removing edges or allowing nodes to remain connected. This algorithm has successfully been applied to Wikipedia’s cross-lingual graph, where we identified and eliminated surprisingly large numbers of inaccurate connections, leading to a large-scale multilingual register of names. In future work, we would like to investigate how our algorithm behaves in extended settings, e.g. we can use heuristics to connect isolated, unconnected articles to likely candidates in other Wikipedias using weighted edges. This can be extended to include mappings from multiple languages to WordNet synsets, with the hope that the weights and link structure will then allow the algorithm to make the final disambiguation decision. Additional scenarios include dealing with co-reference on the Linked Data Web or mappings between thesauri. As such resources are increasingly being linked to Wikipedia and DBpedia, we believe that our techniques will prove useful in making mappings more consistent. A Proofs Proof (Theorem 1). We shall reduce the minimum multicut problem to WDGS. The hardness claims then follow from Chawla et al. (2005). Given a graph G = (V, E) with a positive cost c(e) for each e ∈E, and a set D = {(si, ti) | i = 1 . . . k} of k demand pairs, our goal is to find a multicut M with respect to D with minimum total cost P e∈M c(e). We convert each demand pair (si, ti) into a distinctness assertion Di = ({si}, {ti}) with weight w(Di) = 1+P e∈E c(e). An optimal WDGS solution (C, U1, . . . , Uk) with cost c then implies a multicut C with the same weight, because each w(Di) > P e∈E c(e), so all demand pairs will be satisfied. C is a minimal multicut because any multicut C′ with lower cost would imply a valid WDGS solution (C′, ∅, . . . , ∅) with a cost lower than the optimal one, which is a contradiction. Lemma 4. The linear program given by Definition 5 enforces that for any i,j,k ̸= j,u ∈Di,j, v ∈Di,k, and any path v0, . . . , vt with v0 = u, vt = v we obtain ui,u+Pt−1 l=0 d(vl,vl+1)+ui,v ≥1. 
The integer linear program obtained by augmenting Definition 5 with integer constraints de, ui,v, pi,j,v ∈{0, 1} (for all applicable e, i, j, v) produces optimal solutions (C, U1, . . . , Uk) for WDGS problems, obtained as C = ({e ∈E | de = 1}, Ui = {v | ui,v = 1}. Proof. Without loss of generality, let us assume that j < k. The LP constraints give us pi,j,vt ≤ pi,j,vt−1 +d(vt−1,vt), . . . , pi,j,v1 ≤pi,j,v0 +d(v0,v1), as well as pi,j,v0 = ui,u and pi,j,vt + ui,v ≥1. Hence 1 ≤pi,j,vt +ui,v ≤ui,u+Pt−1 l=0 d(vl,vl+1)+ ui,v. With added integrality constraints, we obtain either u ∈Ui, v ∈Ui, or at least one edge along any path from u to v is cut, i.e. P(u, v, E \ C) = ∅. 851 This proves that any ILP solution enduces a valid WDGS solution (Definition 2). Clearly, the integer program’s objective function minimizes c(C, U1, . . . , Un) (Definition 3) if C = ({e ∈E | de = 1}, Ui = {v | ui,v = 1}. To see that the solutions are optimal, it thus suffices to observe that any optimal WDGS solution (C∗, U∗ 1 , . . . , U∗ n) yields a feasible ILP solution de = IC∗(e), ui,v = IU∗ i (v). Proof (Theorem 2). ri < 1 2 holds for any radius ri chosen by the algorithm, so for any region R(v0, r) grown around a node v0, and any two nodes u, v within that region, the triangle inequality gives us d(u, v) ≤d(u, v0) + d(v0, v) < 1 2 + 1 2 = 1 (maximal distance condition). At the same time, by Lemma 4 and Definition 6 for any u ∈Di,j, v ∈Di,k (j ̸= k), we obtain d(vi,u, vi,v) = d(vi,u, u) + d(u, v) + d(v, vi,v) ≥ 1. With the maximal distance condition above, this means that vi,u and vi,v cannot be in the same region. Hence u, v cannot be in the same region, unless the edge from vi,u to u is cut (in which case u will be placed in Ui) or the edge from v to vi,v is cut (in which case v will be placed in Ui). Since each region is separated from other regions via C, we obtain that ∀i, j, k ̸= j, u, v: u ∈Di,j \ Ui, v ∈Di,k \ Ui implies P(u, v, E \ C) = ∅, so a valid solution is obtained. Lemma 5 (essentially due to Garg et al. (1996)). For any i where ∃j, k > j, u ∈Di,j, v ∈Di,k : P(vi,u, vi,v, E′) ̸= ∅and w(Di) > 0, there exists an r such that w(C(S, r)) ≤2 ln(nq + 1) ˆc(S, r), 0 ≤r < 1 2 for any set S consisting of vi,v nodes. Proof. Define w(S, r) = P v∈S w(C(v, r)). We will prove that there exists an appropriate r with w(C(S, r)) ≤w(S, r) ≤2 ln(nq+1) ˆc(S, r). Assume, for reductio ad absurdum, that ∀r ∈[0, 1 2) : w(S, r) > 2 ln(nq + 1)ˆc(S, r). As we expand the radius r, we note that ˆc(S, r) d dr = w(S, r) whereever ˆc is differentiable with respect to r. There are only a finite number of points r1,...,rl−1 in (0, 1 2) where this is not the case (namely, when ∃u ∈S, v ∈V ′ : d(u, v) = ri). Also note that ˆc increases monotonically for increasing values of r, and that it is universally greater than zero (since there is a path between vi,u, vi,v). Set r0 = 0, rl = 1 2 and choose ϵ such that 0 < ϵ ≪ min{rj+1 −rj | j < l}. Our assumption then implies: lP j=1 R rj−ϵ rj−1+ϵ w(S,r) ˆc(S,r) dr > " lP j=1 rj −rj−1 −2ϵ # 2 ln(nq + 1) lP j=1 ln ˆc(S, rj −ϵ) −ln ˆc(S, rj−1 + ϵ) > 1 2 −2lϵ  2 ln(nq + 1) ln ˆc(S, 1 2 −ϵ) −ln ˆc(S, 0) > (1 −4lϵ) ln(nq + 1) ˆc(S, 1 2 −ϵ) ˆc(S,0) > (nq + 1)1−4lϵ ˆc(S, 1 2 −ϵ) > (nq + 1)1−4lϵˆc(S, 0) For small ϵ, the right term can get arbitrarily close to (nq +1)ˆc(S, 0) ≥ˆcf + ˆc(S, 0), which is strictly larger than ˆc(S, 1 2 −ϵ) no matter how small ϵ becomes, so the initial assumption is false. Proof (Theorem 3). 
Let Si, ri denote the set S and radius r chosen in particular iterations, and ci the corresponding costs incurred: ci = w(C(Si, r) ∩E) + |Ui|w(Di) = w(C(Di, r)). Note that any ri chosen by the algorithm will in fact fulfil the criterion described by Lemma 5, because ri is chosen to minimize the ratio between the two terms, and the minimizing r ∈[0, 1 2) must be among the r considered by the algorithm (w(C(Di, r)) only changes at one of those points, so the minimum is reached by approaching the points from the left). Hence, we obtain ci ≤2 ln(n + 1)ˆc(Si, ri). For our global solution, note that there is no overlap between the regions chosen within an iteration, since regions have a radius strictly smaller than 1 2, while vi,u, vi,v for u ∈Di,j, v ∈Di,k, j ̸= k have a distance of at least 1. Nor is there any overlap between regions from different iterations, because in each iteration the selected regions are removed from G′. Globally, we therefore obtain c(C, U1, . . . , Un) = P i ci < 2 ln(nq + 1) P i ˆc(Si, ri) ≤2 ln(nq + 1)2ˆcf (observe that i ≤nq). Since ˆcf is the objective score for the fractional LP relaxation solution of the WDGS ILP (Lemma 4), we obtain ˆcf ≤ c(C∗, U∗ 1 , . . . , U∗ n), and thus c(C, U1, . . . , Un) < 4 ln(n + 1)c(C∗, U∗ 1 , . . . , U∗ n). To obtain a solution in polynomial time, note that the LP size is polynomial with respect to nq and may be solved using a polynomial algorithm (Karmarkar, 1984). The subsequent steps run in O(nq) iterations, each growing up to |V | regions using O(|V |2) uniform cost searches. 852 References Eytan Adar, Michael Skinner, and Daniel S. Weld. 2009. Information arbitrage across multi-lingual Wikipedia. In Ricardo A. Baeza-Yates, Paolo Boldi, Berthier A. Ribeiro-Neto, and Berkant Barla Cambazoglu, editors, Proceedings of the 2nd International Conference on Web Search and Web Data Mining, WSDM 2009, pages 94–103. ACM. S¨oren Auer, Chris Bizer, Jens Lehmann, Georgi Kobilarov, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: a nucleus for a web of open data. In Aberer et al., editor, The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11–15, 2007, Lecture Notes in Computer Science 4825. Springer. Adi Avidor and Michael Langberg. 2007. The multimultiway cut problem. Theoretical Computer Science, 377(1-3):35–42. Nikhil Bansal, Avrim Blum, and Shuchi Chawla. 2004. Correlation clustering. Machine Learning, 56(13):89–113. Gosse Bouma, Sergio Duarte, and Zahurul Islam. 2009. Cross-lingual alignment and completion of Wikipedia templates. In CLIAWS3 ’09: Proceedings of the Third International Workshop on Cross Lingual Information Access, pages 21–29, Morristown, NJ, USA. Association for Computational Linguistics. Moses Charikar, Venkatesan Guruswami, and Anthony Wirth. 2005. Clustering with qualitative information. Journal of Computer and System Sciences, 71(3):360–383. Shuchi Chawla, Robert Krauthgamer, Ravi Kumar, Yuval Rabani, and D. Sivakumar. 2005. On the hardness of approximating multicut and sparsest-cut. In In Proceedings of the 20th Annual IEEE Conference on Computational Complexity, pages 144–153. Inderjit S. Dhillon, Yuqiang Guan, and Brian Kulis. 2007. Weighted graph cuts without eigenvectors. a multilevel approach. IEEE Trans. Pattern Anal. Mach. Intell., 29(11):1944–1957. Sergio Ferr´andez, Antonio Toral, ´Oscar Ferr´andez, Antonio Ferr´andez, and Rafael Mu˜noz. 2007. 
Applying Wikipedia’s multilingual knowledge to crosslingual question answering. In NLDB, pages 352– 363. Naveen Garg, Vijay V. Vazirani, and Mihalis Yannakakis. 1996. Approximate max-flow min(multi)cut theorems and their applications. SIAM Journal on Computing (SICOMP), 25:698–707. Narendra Karmarkar. 1984. A new polynomial-time algorithm for linear programming. In STOC ’84: Proceedings of the 16th Annual ACM Symposium on Theory of Computing, pages 302–311, New York, NY, USA. ACM. George Karypis and Vipin Kumar. 1998. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on Scientific Computing, 20(1):359–392. Subhash Khot. 2002. On the power of unique 2-prover 1-round games. In STOC ’02: Proceedings of the 34th Annual ACM Symposium on Theory of Computing, pages 767–775, New York, NY, USA. ACM. Tom Leighton and Satish Rao. 1999. Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms. Journal of the ACM, 46(6):787–832. Rada Mihalcea and Andras Csomai. 2007. Wikify!: Linking documents to encyclopedic knowledge. In Proceedings of the 16th ACM Conference on Information and Knowledge Management (CIKM 2007), pages 233–242, New York, NY, USA. ACM. D. Nguyen, A. Overwijk, C. Hauff, R.B. Trieschnigg, D. Hiemstra, and F.M.G. Jong de. 2009. WikiTranslate: query translation for cross-lingual information retrieval using only Wikipedia. In Carol Peters, Thomas Deselaers, Nicola Ferro, and Julio Gonzalo, editors, Evaluating Systems for Multilingual and Multimodal Information Access, Lecture Notes in Computer Science 5706, pages 58–65. Jeff Pasternack and Dan Roth. 2009. Learning better transliterations. In CIKM ’09: Proceeding of the 18th ACM Conference on Information and Knowledge Management, pages 177–186, New York, NY, USA. ACM. Simone Paolo Ponzetto and Michael Strube. 2007. Deriving a large scale taxonomy from Wikipedia. In AAAI 2007: Proceedings of the 22nd Conference on Artificial Intelligence, pages 1440–1445. AAAI Press. Carina Silberer, Wolodja Wentland, Johannes Knopp, and Matthias Hartung. 2008. Building a multilingual lexical resource for named entity disambiguation, translation and transliteration. In European, editor, Proceedings of the Sixth International Language Resources and Evaluation (LREC’08), Marrakech, Morocco. Philipp Sorg and Philipp Cimiano. 2008. Enriching the crosslingual link structure of Wikipedia - a classification-based approach. In Proceedings of the AAAI 2008 Workshop on Wikipedia and Artifical Intelligence. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A Core of Semantic Knowledge. In Proceedings of the 16th International World Wide Web conference, WWW, New York, NY, USA. ACM Press. Zhiping Zeng, Anthony K. H. Tung, Jianyong Wang, Jianhua Feng, and Lizhu Zhou. 2009. Comparing stars: On approximating graph edit distance. Proceedings of the VLDB Endowment, 2(1):25–36. 853
2010
87
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 854–864, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation Michael Bloodgood Human Language Technology Center of Excellence Johns Hopkins University Baltimore, MD 21211 [email protected] Chris Callison-Burch Center for Language and Speech Processing Johns Hopkins University Baltimore, MD 21211 [email protected] Abstract We explore how to improve machine translation systems by adding more translation data in situations where we already have substantial resources. The main challenge is how to buck the trend of diminishing returns that is commonly encountered. We present an active learning-style data solicitation algorithm to meet this challenge. We test it, gathering annotations via Amazon Mechanical Turk, and find that we get an order of magnitude increase in performance rates of improvement. 1 Introduction Figure 1 shows the learning curves for two state of the art statistical machine translation (SMT) systems for Urdu-English translation. Observe how the learning curves rise rapidly at first but then a trend of diminishing returns occurs: put simply, the curves flatten. This paper investigates whether we can buck the trend of diminishing returns, and if so, how we can do it effectively. Active learning (AL) has been applied to SMT recently (Haffari et al., 2009; Haffari and Sarkar, 2009) but they were interested in starting with a tiny seed set of data, and they stopped their investigations after only adding a relatively tiny amount of data as depicted in Figure 1. In contrast, we are interested in applying AL when a large amount of data already exists as is the case for many important lanuage pairs. We develop an AL algorithm that focuses on keeping annotation costs (measured by time in seconds) low. It succeeds in doing this by only soliciting translations for parts of sentences. We show that this gets a savings in human annotation time above and beyond what the reduction in # words annotated would have indicated by a factor of about three and speculate as to why. 0 2 4 6 8 10 x 10 4 0 5 10 15 20 25 30 Number of Sentences in Training Data BLEU Score JSyntax and JHier Learning Curves on the LDC Urdu−English Language Pack (BLEU vs Sentences) jHier jSyntax as far as previous AL for SMT research studies were conducted where we begin our main investigations into bucking the trend of diminishing returns Figure 1: Syntax-based and Hierarchical PhraseBased MT systems’ learning curves on the LDC Urdu-English language pack. The x-axis measures the number of sentence pairs in the training data. The y-axis measures BLEU score. Note the diminishing returns as more data is added. Also note how relatively early on in the process previous studies were terminated. In contrast, the focus of our main experiments doesn’t even begin until much higher performance has already been achieved with a period of diminishing returns firmly established. We conduct experiments for Urdu-English translation, gathering annotations via Amazon Mechanical Turk (MTurk) and show that we can indeed buck the trend of diminishing returns, achieving an order of magnitude increase in the rate of improvement in performance. 
Section 2 discusses related work; Section 3 discusses preliminary experiments that show the guiding principles behind the algorithm we use; Section 4 explains our method for soliciting new translation data; Section 5 presents our main results; and Section 6 concludes. 854 2 Related Work Active learning has been shown to be effective for improving NLP systems and reducing annotation burdens for a number of NLP tasks (see, e.g., (Hwa, 2000; Sassano, 2002; Bloodgood and Vijay-Shanker, 2008; Bloodgood and VijayShanker, 2009b; Mairesse et al., 2010; Vickrey et al., 2010)). The current paper is most highly related to previous work falling into three main areas: use of AL when large corpora already exist; cost-focused AL; and AL for SMT. In a sense, the work of Banko and Brill (2001) is closely related to ours. Though their focus is mainly on investigating the performance of learning methods on giant corpora many orders of magnitude larger than previously used, they do lay out how AL might be useful to apply to acquire data to augment a large set cheaply because they recognize the problem of diminishing returns that we discussed in Section 1. The second area of work that is related to ours is previous work on AL that is cost-conscious. The vast majority of AL research has not focused on accurate cost accounting and a typical assumption is that each annotatable has equal annotation cost. An early exception in the AL for NLP field was the work of Hwa (2000), which makes a point of using # of brackets to measure cost for a syntactic analysis task instead of using # of sentences. Another relatively early work in our field along these lines was the work of Ngai and Yarowsky (2000), which measured actual times of annotation to compare the efficacy of rule writing versus annotation with AL for the task of BaseNP chunking. Osborne and Baldridge (2004) argued for the use of discriminant cost over unit cost for the task of Head Phrase Structure Grammar parse selection. King et al. (2004) design a robot that tests gene functions. The robot chooses which experiments to conduct by using AL and takes monetary costs (in pounds sterling) into account during AL selection and evaluation. Unlike our situation for SMT, their costs are all known beforehand because they are simply the cost of materials to conduct the experiments, which are already known to the robot. Hachey et al. (2005) showed that selectively sampled examples for an NER task took longer to annotate and had lower inter-annotator agreement. This work is related to ours because it shows that how examples are selected can impact the cost of annotation, an idea we turn around to use for our advantage when developing our data selection algorithm. Haertel et al. (2008) emphasize measuring costs carefully for AL for POS tagging. They develop a model based on a user study that can estimate the time required for POS annotating. Kapoor et al. (2007) assign costs for AL based on message length for a voicemail classification task. In contrast, we show for SMT that annotation times do not scale according to length in words and we show our method can achieve a speedup in annotation time above and beyond what the reduction in words would indicate. Tomanek and Hahn (2009) measure cost by # of tokens for an NER task. Their AL method only solicits labels for parts of sentences in the interest of reducing annotation effort. 
Along these lines, our method is similar in the respect that we also will only solicit annotation for parts of sentences, though we prefer to measure cost with time and we show that time doesn’t track with token length for SMT. Haffari et al. (2009), Haffari and Sarkar (2009), and Ambati et al. (2010) investigate AL for SMT. There are two major differences between our work and this previous work. One is that our intended use cases are very different. They deal with the more traditional AL setting of starting from an extremely small set of seed data. Also, by SMT standards, they only add a very tiny amount of data during AL. All their simulations top out at 10,000 sentences of labeled data and the models learned have relatively low translation quality compared to the state of the art. On the other hand, in the current paper, we demonstrate how to apply AL in situations where we already have large corpora. Our goal is to buck the trend of diminishing returns and use AL to add data to build some of the highest-performing MT systems in the world while keeping annotation costs low. See Figure 1 from Section 1, which contrasts where (Haffari et al., 2009; Haffari and Sarkar, 2009) stop their investigations with where we begin our studies. The other major difference is that (Haffari et al., 2009; Haffari and Sarkar, 2009) measure annotation cost by # of sentences. In contrast, we bring to light some potential drawbacks of this practice, showing it can lead to different conclusions than if other annotation cost metrics are used, such as time and money, which are the metrics that we use. 855 3 Simulation Experiments Here we report on results of simulation experiments that help to illustrate and motivate the design decisions of the algorithm we present in Section 4. We use the Urdu-English language pack1 from the Linguistic Data Consortium (LDC), which contains ≈88000 Urdu-English sentence translation pairs, amounting to ≈1.7 million Urdu words translated into English. All experiments in this paper evaluate on a genre-balanced split of the NIST2008 Urdu-English test set. In addition, the language pack contains an Urdu-English dictionary consisting of ≈114000 entries. In all the experiments, we use the dictionary at every iteration of training. This will make it harder for us to show our methods providing substantial gains since the dictionary will provide a higher base performance to begin with. However, it would be artificial to ignore dictionary resources when they exist. We experiment with two translation models: hierarchical phrase-based translation (Chiang, 2007) and syntax augmented translation (Zollmann and Venugopal, 2006), both of which are implemented in the Joshua decoder (Li et al., 2009). We hereafter refer to these systems as jHier and jSyntax, respectively. We will now present results of experiments with different methods for growing MT training data. The results are organized into three areas of investigations: 1. annotation costs; 2. managing uncertainty; and 3. how to automatically detect when to stop soliciting annotations from a pool of data. 3.1 Annotation Costs We begin our cost investigations with four simple methods for growing MT training data: random, shortest, longest, and VocabGrowth sentence selection. The first three methods are selfexplanatory. 
VocabGrowth (hereafter VG) selection is modeled after the best methods from previous work (Haffari et al., 2009; Haffari and Sarkar, 2009), which are based on preferring sentences that contain phrases that occur frequently in unlabeled data and infrequently in the so-far labeled data. Our VG method selects sentences for translation that contain n-grams (for n in {1,2,3,4}) that 1LDC Catalog No.: LDC2006E110. Init: Go through all available training data (labeled and unlabeled) and obtain frequency counts for every n-gram (n in {1, 2, 3, 4}) that occurs. sortedNGrams ←Sort n-grams by frequency in descending order. Loop until stopping criterion (see Section 3.3) is met 1. trigger ←Go down sortedNGrams list and find the first n-gram that isn’t covered in the so far labeled training data. 2. selectedSentence ←Find a sentence that contains trigger. 3. Remove selectedSentence from unlabeled data and add it to labeled training data. End Loop Figure 2: The VG sentence selection algorithm do not occur at all in our so-far labeled data. We call an n-gram “covered” if it occurs at least once in our so-far labeled data. VG has a preference for covering frequent n-grams before covering infrequent n-grams. The VG method is depicted in Figure 2. Figure 3 shows the learning curves for both jHier and jSyntax for VG selection and random selection. The y-axis measures BLEU score (Papineni et al., 2002),which is a fast automatic way of measuring translation quality that has been shown to correlate with human judgments and is perhaps the most widely used metric in the MT community. The x-axis measures the number of sentence translation pairs in the training data. The VG curves are cut off at the point at which the stopping criterion in Section 3.3 is met. From Figure 3 it might appear that VG selection is better than random selection, achieving higher-performing systems with fewer translations in the labeled data. However, it is important to take care when measuring annotation costs (especially for relatively complicated tasks such as translation). Figure 4 shows the learning curves for the same systems and selection methods as in Figure 3 but now the x-axis measures the number of foreign words in the training data. The difference between VG and random selection now appears smaller. For an extreme case, to illustrate the ramifica856 0 10,000 20,000 30,000 40,000 50,000 60,000 70,000 80,000 90,000 0 5 10 15 20 25 30 jHier and jSyntax: VG vs Random selection (BLEU vs Sents) Number of Sentence Pairs in the Training Data BLEU Score jHier: random selection jHier: VG selection jSyntax: random selection jSyntax: VG selection where we will start our main experiments where previous AL for SMT research stopped their experiments Figure 3: Random vs VG selection. The x-axis measures the number of sentence pairs in the training data. The y-axis measures BLEU score. tions of measuring translation annotation cost by # of sentences versus # of words, consider Figures 5 and 6. They both show the same three selection methods but Figure 5 measures the x-axis by # of sentences and Figure 6 measures by # of words. In Figure 5, one would conclude that shortest is a far inferior selection method to longest but in Figure 6 one would conclude the opposite. Measuring annotation time and cost in dollars are probably the most important measures of annotation cost. 
We can’t measure these for the simulated experiments but we will use time (in seconds) and money (in US dollars) as cost measures in Section 5, which discusses our nonsimulated AL experiments. If # sentences or # words track these other more relevant costs in predictable known relationships, then it would suffice to measure # sentences or # words instead. But it’s clear that different sentences can have very different annotation time requirements according to how long and complicated they are so we will not use # sentences as an annotation cost any more. It is not as clear how # words tracks with annotation time. In Section 5 we will present evidence showing that time per word can vary considerably and also show a method for soliciting annotations that reduces time per word by nearly a factor of three. As it is prudent to evaluate using accurate cost accounting, so it is also prudent to develop new AL algorithms that take costs carefully into account. Hence, reducing annotation time burdens 0 0.5 1 1.5 2 x 10 6 0 5 10 15 20 25 30 jHier and jSyntax: VG vs Random selection (BLEU vs FWords) Number of Foreign Words in Training Data BLEU Score jHier: random selection jHier: VG selection jSyntax: random selection jSyntax: VG selection Figure 4: Random vs VG selection. The x-axis measures the number of foreign words in the training data. The y-axis measures BLEU score. instead of the # of sentences translated (which might be quite a different thing) will be a cornerstone of the algorithm we describe in Section 4. 3.2 Managing Uncertainty One of the most successful of all AL methods developed to date is uncertainty sampling and it has been applied successfully many times (e.g.,(Lewis and Gale, 1994; Tong and Koller, 2002)). The intuition is clear: much can be learned (potentially) if there is great uncertainty. However, with MT being a relatively complicated task (compared with binary classification, for example), it might be the case that the uncertainty approach has to be re-considered. If words have never occurred in the training data, then uncertainty can be expected to be high. But we are concerned that if a sentence is translated for which (almost) no words have been seen in training yet, though uncertainty will be high (which is usually considered good for AL), the word alignments may be incorrect and then subsequent learning from that translation pair will be severely hampered. We tested this hypothesis and Figure 7 shows empirical evidence that it is true. Along with VG, two other selection methods’ learning curves are charted in Figure 7: mostNew, which prefers to select those sentences which have the largest # of unseen words in them; and moderateNew, which aims to prefer sentences that have a moderate # of unseen words, preferring sentences with ≈ten 857 0 2 4 6 8 10 x 10 4 0 5 10 15 20 25 jHiero: Random, Shortest, and Longest selection BLEU Score Number of Sentences in Training Data random shortest longest Figure 5: Random vs Shortest vs Longest selection. The x-axis measures the number of sentence pairs in the training data. The y-axis measures BLEU score. unknown words in them. One can see that mostNew underperforms VG. This could have been due to VG’s frequency component, which mostNew doesn’t have. But moderateNew also doesn’t have a frequency preference so it is likely that mostNew winds up overwhelming the MT training system, word alignments are incorrect, and less is learned as a result. 
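To make the selection criteria concrete, here is a minimal sketch (our own illustrative code, not the authors' implementation) of how the mostNew and moderateNew preferences compared in Figure 7 could be scored; labeled_vocab is assumed to be the set of word types seen so far in the labeled data:

```python
def num_unseen(sentence, labeled_vocab):
    """Number of tokens in the candidate sentence whose word types are unseen in the labeled data."""
    return sum(1 for word in sentence if word not in labeled_vocab)

def most_new_score(sentence, labeled_vocab):
    """mostNew: prefer the sentences with the largest number of unseen words."""
    return num_unseen(sentence, labeled_vocab)

def moderate_new_score(sentence, labeled_vocab, target=10):
    """moderateNew: prefer sentences with about `target` (roughly ten) unseen words."""
    return -abs(num_unseen(sentence, labeled_vocab) - target)

def select_next(pool, labeled_vocab, score_fn):
    """Greedily pick the highest-scoring sentence from the unlabeled pool."""
    return max(pool, key=lambda sentence: score_fn(sentence, labeled_vocab))
```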
In light of this, the algorithm we develop in Section 4 will be designed to avoid this word-alignment danger.

Figure 7: VG vs MostNew vs ModerateNew selection. The x-axis measures the number of foreign words in the training data. The y-axis measures BLEU score.

3.3 Automatic Stopping

The problem of automatically detecting when to stop AL is a substantial one, discussed at length in the literature (e.g., (Bloodgood and Vijay-Shanker, 2009a; Schohn and Cohn, 2000; Vlachos, 2008)). In our simulation, we stop VG once all n-grams (n in {1,2,3,4}) have been covered. Though simple, this stopping criterion seems to work well, as can be seen from where the curve for VG is cut off in Figures 3 and 4. It stops after 1,293,093 words have been translated, with jHier’s BLEU=21.92 and jSyntax’s BLEU=26.10 at the stopping point. The ending BLEU scores (with the full corpus annotated) are 21.87 and 26.01 for jHier and jSyntax, respectively. So our stopping criterion saves 22.3% of the annotation (in terms of words) and actually achieves slightly higher BLEU scores than if all the data were used. Note: this “less is more” phenomenon has been commonly observed in AL settings (e.g., (Bloodgood and Vijay-Shanker, 2009a; Schohn and Cohn, 2000)).

Figure 6: Random vs Shortest vs Longest selection. The x-axis measures the number of foreign words in the training data. The y-axis measures BLEU score.

4 Highlighted N-Gram Method

In this section we describe a method for soliciting human translations that we have applied successfully to improving translation quality in real (not simulated) conditions. We call the method the Highlighted N-Gram method, or HNG for short. HNG solicits translations only for trigger n-grams and not for entire sentences. We provide sentential context, highlight the trigger n-gram that we want translated, and ask for a translation of just the highlighted trigger n-gram. HNG asks for translations of triggers in the same order that the triggers are encountered by the algorithm in Figure 2. A screenshot of our interface is depicted in Figure 8. The same stopping criterion is used as in the last section. When the stopping criterion becomes true, it is time to tap a new unlabeled pool of foreign text, if available.

Figure 8: Screenshot of the interface we used for soliciting translations for triggers.

Our motivations for soliciting translations for only parts of sentences are twofold, corresponding to two possible cases. Case one is that a translation model learned from the so-far labeled data will be able to translate most of the non-trigger words in the sentence correctly. Thus, by asking a human to translate only the trigger words, we avoid wasting human translation effort. (We will show in the next section that we even get a much larger speedup, above and beyond what the reduction in the number of translated words would give us.) Case two is that a translation model learned from the so-far labeled data will, in addition to not being able to translate the trigger words correctly, also not be able to translate most of the non-trigger words correctly. One might think that this would be a great sentence to have translated, because the machine can potentially learn a lot from the translation. Indeed, one of the overarching themes of AL research is to query examples where uncertainty is greatest. But, as we showed evidence for in the last section, for the case of SMT too much uncertainty can in a sense overwhelm the machine, and it might be better to provide new training data in a more gradual manner. A sentence with a large number of unseen words is likely to get word-aligned incorrectly, and learning from that translation could then be hampered. By asking for a translation of only the trigger words, we expect to circumvent this problem in large part. The next section presents the results of experiments showing that the HNG algorithm is indeed practically effective, and analyzes various aspects of HNG’s behavior in more depth.

5 Experiments and Discussion

5.1 General Setup

We set out to see whether we could use the HNG method to achieve translation quality improvements by gathering additional translations to add to the training data of the entire LDC language pack, including its dictionary. In particular, we wanted to see if we could achieve translation improvements on top of systems that already perform at the state of the art, trained on the entire LDC corpus. Note that at the outset this is an ambitious endeavor (recall the flattening of the curves in Figure 1 from Section 1). Snow et al. (2008) explored the use of the Amazon Mechanical Turk (MTurk) web service for gathering annotations for a variety of natural language processing tasks, and recently MTurk has been shown to be a quick, cost-effective way to gather Urdu-English translations (Bloodgood and Callison-Burch, 2010). We used the MTurk web service to gather our annotations. Specifically, we first crawled a large set of BBC articles in Urdu from the internet and used this as our unlabeled pool from which to gather annotations. We applied the HNG method from Section 4 to determine what to post on MTurk for workers to translate.2 We gathered 20,580 n-gram translations, for which we paid $0.01 USD per translation, giving us a total cost of $205.80 USD. We also gathered 1,632 randomly chosen Urdu sentence translations as a control set, for which we paid $0.10 USD per sentence translation.3

2 For practical reasons, however, we restricted ourselves to sentences no longer than 60 Urdu words.
3 The prices we paid were not market-driven; we just chose prices we thought were reasonable. In hindsight, given how much quicker the phrase translations are for people, we could have had a greater disparity in price.

5.2 Accounting for Translation Time

MTurk returns with each assignment the “WorkTimeInSeconds.” This is the amount of time between when a worker accepts an assignment and when the worker submits the completed assignment.
We use this value to estimate annotation times.4 Figure 9 shows HNG collection versus random collection from MTurk. The x-axis measures the number of seconds of annotation time. Note that HNG is more effective. A result that may be particularly interesting is that HNG results in a time speedup by more than just the reduction in translated words would indicate. The average time to translate a word of Urdu with the sentence postings to MTurk was 32.92 seconds. The average time to translate a word with the HNG postings to MTurk was 11.98 seconds. This is nearly three times faster. Figure 10 shows the distribution of speeds (in seconds per word) for HNG postings versus complete sentence postings. Note that the HNG postings consistently result in faster translation speeds than the sentence postings5. We hypothesize that this speedup comes about because when translating a full sentence, there’s the time required to examine each word and translate them in some sense (even if not one-to-one) and then there is an extra significant overhead time to put it all together and synthesize into a larger sentence translation. The factor of three speedup is evidence that this overhead is significant effort compared to just quickly translating short n-grams from a sentence. This speedup is an additional benefit of the HNG approach. 5.3 Bucking the Trend We gathered translations for ≈54,500 Urdu words via the use of HNG on MTurk. This is a relatively small amount, ≈3% of the LDC corpus. Figure 11 shows the performance when we add this training data to the LDC corpus. The rect4It’s imperfect because of network delays and if a person is multitasking or pausing between their accept and submit times. Nonetheless, the times ought to be better estimates as they are taken over larger samples. 5The average speed for the HNG postings seems to be slower than the histogram indicates. This is because there were a few extremely slow outlier speeds for a handful of HNG postings. These are almost certainly not cases when the turker is working continuously on the task and so the average speed we computed for the HNG postings might be slower than the actual speed and hence the true speedup may even be faster than indicated by the difference between the average speeds we reported. 0 1 2 3 4 5 6 x 10 5 21.6 21.8 22 22.2 22.4 22.6 22.8 Number of Seconds of Annotation Time BLEU Score jHier: HNG Collection vs Random Collection of Annotations from MTurk random HNG Figure 9: HNG vs Random collection of new data via MTurk. y-axis measures BLEU. x-axis measures annotation time in seconds. angle around the last 700,000 words of the LDC data is wide and short (it has a height of 0.9 BLEU points and a width of 700,000 words) but the rectangle around the newly added translations is narrow and tall (a height of 1 BLEU point and a width of 54,500 words). Visually, it appears we are succeeding in bucking the trend of diminishing returns. We further confirmed this by running a least-squares linear regression on the points of the last 700,000 words annotated in the LDC data and also for the points in the new data that we acquired via MTurk for $205.80 USD. We find that the slope fit to our new data is 6.6245E-06 BLEU points per Urdu word, or 6.6245 BLEU points for a million Urdu words. The slope fit to the LDC data is only 7.4957E-07 BLEU points per word, or only 0.74957 BLEU points for a million words. 
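As a small illustration of how these slopes are obtained (a sketch with made-up point values; the real values come from the curves in Figure 11), an ordinary least-squares fit of BLEU against words annotated gives the per-word gain:

```python
import numpy as np

# Hypothetical (words annotated, BLEU) points standing in for the curves in Figure 11.
ldc_points = [(1000000, 20.9), (1350000, 21.2), (1700000, 21.4)]
mturk_points = [(1700000, 21.4), (1727000, 21.6), (1754500, 21.8)]

def bleu_per_word_slope(points):
    """Least-squares slope of BLEU score against the number of annotated foreign words."""
    x, y = zip(*points)
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

print(bleu_per_word_slope(ldc_points))    # BLEU points gained per additional Urdu word (small)
print(bleu_per_word_slope(mturk_points))  # roughly an order of magnitude larger in the paper
```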
This is already an order of magnitude difference that would make the difference between it being worth adding more data and not being worth it; and this is leaving aside the added time speedup that our method enjoys. Still, we wondered why we could not have raised BLEU scores even faster. The main hurdle seems to be one of coverage. Of the 20,580 ngrams we collected, only 571 (i.e., 2.77%) of them ever even occur in the test set. 5.4 Beyond BLEU Scores BLEU is an imperfect metric (Callison-Burch et al., 2006). One reason is that it rates all ngram 860 0 20 40 60 80 100 120 0 0.05 0.1 0.15 0.2 0.25 Time (in seconds) per foreign word translated Relative Frequency Histogram showing the distribution of translation speeds (in seconds per foreign word) when translations are collected via n−grams versus via complete sentences n−grams sentences average time per word for sentences average time per word for n−grams Figure 10: Distribution of translation speeds (in seconds per word) for HNG postings versus complete sentence postings. The y-axis measures relative frequency. The x-axis measures translation speed in seconds per word (so farther to the left is faster). mismatches equally although some are much more important than others. Another reason is it’s not intuitive what a gain of x BLEU points means in practice. Here we show some concrete example translations to show the types of improvements we’re achieving and also some examples which suggest improvements we can make to our AL selection algorithm in the future. Figure 12 shows a prototypical example of our system working. Figure 13 shows an example where the strategy is working partially but not as well as it might. The Urdu phrase was translated by turkers as “gowned veil”. However, since the word aligner just aligns the word to “gowned”, we only see “gowned” in our output. This prompts a number of discussion points. First, the ‘after system’ has better translations but they’re not rewarded by BLEU scores because the references use the words ‘burqah’ or just ‘veil’ without ‘gowned’. Second, we hypothesize that we may be able to see improvements by overriding the automatic alignment software whenever we obtain a many-to-one or one-to-many (in terms of words) translation for one of our trigger phrases. In such cases, we’d like to make sure that every word on the ‘many’ side is aligned to the 1 1.2 1.4 1.6 1.8 x 10 6 21 21.5 22 22.5 23 23.5 Bucking the Trend: JHiero Translation Quality versus Number of Foreign Words Annotated BLEU Score Number of Foreign Words Annotated the approx. 54,500 foreign words we selectively sampled for annotation cost = $205.80 last approx. 700,000 foreign words annotated in LDC data Figure 11: Bucking the trend: performance of HNG-selected additional data from BBC web crawl data annotated via Amazon Mechanical Turk. y-axis measures BLEU. x-axis measures number of words annotated. Figure 12: Example of strategy working. single word on the ‘one’ side. For example, we would force both ‘gowned’ and ‘veil’ to be aligned to the single Urdu word instead of allowing the automatic aligner to only align ‘gowned’. Figure 14 shows an example where our “before” system already got the translation correct without the need for the additional phrase translation. This is because though the “before” system had never seen the Urdu expression for “12 May”, it had seen the Urdu words for “12” and “May” in isolation and was able to successfully compose them. 
An area of future work is to use the “before” system to determine such cases automatically and avoid asking humans to provide translations in such cases. 861 Figure 13: Example showing where we can improve our selection strategy. Figure 14: Example showing where we can improve our selection strategy. 6 Conclusions and Future Work We succeeded in bucking the trend of diminishing returns and improving translation quality while keeping annotation costs low. In future work we would like to apply these ideas to domain adaptation (say, general-purpose MT system to work for scientific domain such as chemistry). Also, we would like to test with more languages, increase the amount of data we can gather, and investigate stopping criteria further. Also, we would like to investigate increasing the efficiency of the selection algorithm by addressing issues such as the one raised by the 12 May example presented earlier. Acknowledgements This work was supported by the Johns Hopkins University Human Language Technology Center of Excellence. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsor. References Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010. Active learning and crowd-sourcing for machine translation. In Proceedings of the Seventh conference on International Language Resources and Evaluation (LREC’10), Valletta, Malta, may. European Language Resources Association (ELRA). Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 26–33, Toulouse, France, July. Association for Computational Linguistics. Michael Bloodgood and Chris Callison-Burch. 2010. Using mechanical turk to build machine translation evaluation sets. In Proceedings of the Workshop on Creating Speech and Language Data With Amazon’s Mechanical Turk, Los Angeles, California, June. Association for Computational Linguistics. Michael Bloodgood and K Vijay-Shanker. 2008. An approach to reducing annotation costs for bionlp. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing, pages 104–105, Columbus, Ohio, June. Association for Computational Linguistics. Michael Bloodgood and K Vijay-Shanker. 2009a. A method for stopping active learning based on stabilizing predictions and the need for user-adjustable stopping. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 39–47, Boulder, Colorado, June. Association for Computational Linguistics. Michael Bloodgood and K Vijay-Shanker. 2009b. Taking into account the differences between actively and passively acquired data: The case of active learning with support vector machines for imbalanced datasets. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 137– 140, Boulder, Colorado, June. Association for Computational Linguistics. Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of Bleu in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2006), Trento, Italy. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Ben Hachey, Beatrice Alex, and Markus Becker. 2005. 
Investigating the effects of selective sampling on the annotation task. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 144–151, Ann Arbor, Michigan, June. Association for Computational Linguistics. Robbie Haertel, Eric Ringger, Kevin Seppi, James Carroll, and Peter McClanahan. 2008. Assessing the 862 costs of sampling methods in active learning for annotation. In Proceedings of ACL-08: HLT, Short Papers, pages 65–68, Columbus, Ohio, June. Association for Computational Linguistics. Gholamreza Haffari and Anoop Sarkar. 2009. Active learning for multilingual statistical machine translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 181–189, Suntec, Singapore, August. Association for Computational Linguistics. Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 415–423, Boulder, Colorado, June. Association for Computational Linguistics. Rebecca Hwa. 2000. Sample selection for statistical grammar induction. In Hinrich Sch¨utze and KehYih Su, editors, Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing, pages 45–53. Association for Computational Linguistics, Somerset, New Jersey. Ashish Kapoor, Eric Horvitz, and Sumit Basu. 2007. Selective supervision: Guiding supervised learning with decision-theoretic active learning. In Manuela M. Veloso, editor, IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 612, 2007, pages 877–882. Ross D. King, Kenneth E. Whelan, Ffion M. Jones, Philip G. K. Reiser, Christopher H. Bryant, Stephen H. Muggleton, Douglas B. Kell, and Stephen G. Oliver. 2004. Functional genomic hypothesis generation and experimentation by a robot scientist. Nature, 427:247–252, 15 January. David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR ’94: Proceedings of the 17th annual international ACM SIGIR conference on Research and development in information retrieval, pages 3–12, New York, NY, USA. Springer-Verlag New York, Inc. Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren Thornton, Jonathan Weese, and Omar Zaidan. 2009. Joshua: An open source toolkit for parsing-based machine translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 135–139, Athens, Greece, March. Association for Computational Linguistics. Francois Mairesse, Milica Gasic, Filip Jurcicek, Simon Keizer, Jorge Prombonas, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-based statistical language generation using graphical models and active learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden, July. Association for Computational Linguistics. Grace Ngai and David Yarowsky. 2000. Rule writing or annotation: cost-efficient resource usage for base noun phrase chunking. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Miles Osborne and Jason Baldridge. 2004. Ensemblebased active learning for parse selection. 
In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 89–96, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics. Manabu Sassano. 2002. An empirical study of active learning with support vector machines for japanese word segmentation. In ACL ’02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 505–512, Morristown, NJ, USA. Association for Computational Linguistics. Greg Schohn and David Cohn. 2000. Less is more: Active learning with support vector machines. In Proc. 17th International Conf. on Machine Learning, pages 839–846. Morgan Kaufmann, San Francisco, CA. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast – but is it good? evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 254–263, Honolulu, Hawaii, October. Association for Computational Linguistics. Katrin Tomanek and Udo Hahn. 2009. Semisupervised active learning for sequence labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1039–1047, Suntec, Singapore, August. Association for Computational Linguistics. Simon Tong and Daphne Koller. 2002. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research (JMLR), 2:45–66. David Vickrey, Oscar Kipersztok, and Daphne Koller. 2010. An active learning approach to finding related terms. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden, July. Association for Computational Linguistics. 863 Andreas Vlachos. 2008. A stopping criterion for active learning. Computer Speech and Language, 22(3):295–312. Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings of the NAACL-2006 Workshop on Statistical Machine Translation (WMT06), New York, New York. 864
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 865–874, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Creating Robust Supervised Classifiers via Web-Scale N-gram Data Shane Bergsma University of Alberta [email protected] Emily Pitler University of Pennsylvania [email protected] Dekang Lin Google, Inc. [email protected] Abstract In this paper, we systematically assess the value of using web-scale N-gram data in state-of-the-art supervised NLP classifiers. We compare classifiers that include or exclude features for the counts of various N-grams, where the counts are obtained from a web-scale auxiliary corpus. We show that including N-gram count features can advance the state-of-the-art accuracy on standard data sets for adjective ordering, spelling correction, noun compound bracketing, and verb part-of-speech disambiguation. More importantly, when operating on new domains, or when labeled training data is not plentiful, we show that using web-scale N-gram features is essential for achieving robust performance. 1 Introduction Many NLP systems use web-scale N-gram counts (Keller and Lapata, 2003; Nakov and Hearst, 2005; Brants et al., 2007). Lapata and Keller (2005) demonstrate good performance on eight tasks using unsupervised web-based models. They show web counts are superior to counts from a large corpus. Bergsma et al. (2009) propose unsupervised and supervised systems that use counts from Google’s N-gram corpus (Brants and Franz, 2006). Web-based models perform particularly well on generation tasks, where systems choose between competing sequences of output text (such as different spellings), as opposed to analysis tasks, where systems choose between abstract labels (such as part-of-speech tags or parse trees). In this work, we address two natural and related questions which these previous studies leave open: 1. Is there a benefit in combining web-scale counts with the features used in state-of-theart supervised approaches? 2. How well do web-based models perform on new domains or when labeled data is scarce? We address these questions on two generation and two analysis tasks, using both existing N-gram data and a novel web-scale N-gram corpus that includes part-of-speech information (Section 2). While previous work has combined web-scale features with other features in specific classification problems (Modjeska et al., 2003; Yang et al., 2005; Vadas and Curran, 2007b), we provide a multi-task, multi-domain comparison. Some may question why supervised approaches are needed at all for generation problems. Why not solely rely on direct evidence from a giant corpus? For example, for the task of prenominal adjective ordering (Section 3), a system that needs to describe a ball that is both big and red can simply check that big red is more common on the web than red big, and order the adjectives accordingly. It is, however, suboptimal to only use N-gram data. For example, ordering adjectives by direct web evidence performs 7% worse than our best supervised system (Section 3.2). No matter how large the web becomes, there will always be plausible constructions that never occur. For example, there are currently no pages indexed by Google with the preferred adjective ordering for bedraggled 56-year-old [professor]. Also, in a particular domain, words may have a non-standard usage. 
Systems trained on labeled data can learn the domain usage and leverage other regularities, such as suffixes and transitivity for adjective ordering. With these benefits, systems trained on labeled data have become the dominant technology in academic NLP. There is a growing recognition, however, that these systems are highly domain dependent. For example, parsers trained on annotated newspaper text perform poorly on other genres (Gildea, 2001). While many approaches have adapted NLP systems to specific domains (Tsuruoka et al., 2005; McClosky et al., 2006; Blitzer 865 et al., 2007; Daum´e III, 2007; Rimell and Clark, 2008), these techniques assume the system knows on which domain it is being used, and that it has access to representative data in that domain. These assumptions are unrealistic in many real-world situations; for example, when automatically processing a heterogeneous collection of web pages. How well do supervised and unsupervised NLP systems perform when used uncustomized, out-of-the-box on new domains, and how can we best design our systems for robust open-domain performance? Our results show that using web-scale N-gram data in supervised systems advances the state-ofthe-art performance on standard analysis and generation tasks. More importantly, when operating out-of-domain, or when labeled data is not plentiful, using web-scale N-gram data not only helps achieve good performance – it is essential. 2 Experiments and Data 2.1 Experimental Design We evaluate the benefit of N-gram data on multiclass classification problems. For each task, we have some labeled data indicating the correct output for each example. We evaluate with accuracy: the percentage of examples correctly classified in test data. We use one in-domain and two out-ofdomain test sets for each task. Statistical significance is assessed with McNemar’s test, p<0.01. We provide results for unsupervised approaches and the majority-class baseline for each task. For our supervised approaches, we represent the examples as feature vectors, and learn a classifier on the training vectors. There are two feature classes: features that use N-grams (N-GM) and those that do not (LEX). N-GM features are real-valued features giving the log-count of a particular N-gram in the auxiliary web corpus. LEX features are binary features that indicate the presence or absence of a particular string at a given position in the input. The name LEX emphasizes that they identify specific lexical items. The instantiations of both types of features depend on the task and are described in the corresponding sections. Each classifier is a linear Support Vector Machine (SVM), trained using LIBLINEAR (Fan et al., 2008) on the standard domain. We use the one-vsall strategy when there are more than two classes (in Section 4). We plot learning curves to measure the accuracy of the classifier when the number of labeled training examples varies. The size of the N-gram data and its counts remain constant. We always optimize the SVM’s (L2) regularization parameter on the in-domain development set. We present results with L2-SVM, but achieve similar results with L1-SVM and logistic regression. 2.2 Tasks and Labeled Data We study two generation tasks: prenominal adjective ordering (Section 3) and context-sensitive spelling correction (Section 4), followed by two analysis tasks: noun compound bracketing (Section 5) and verb part-of-speech disambiguation (Section 6). In each section, we provide references to the origin of the labeled data. 
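To make the two feature classes from Section 2.1 concrete, the sketch below (our own illustration; the names and the count lookup are hypothetical) shows how one example could be turned into a mixed vector of real-valued N-GM log-count features and binary LEX indicators before being passed to a linear SVM such as LIBLINEAR:

```python
import math

def ngram_logcount(ngram, web_counts):
    """N-GM feature value: log of the web count for this pattern (0.0 if unseen)."""
    count = web_counts.get(ngram, 0)
    return math.log(count) if count > 0 else 0.0

def make_features(context_ngrams, lexical_items, web_counts):
    """Build one sparse example: real-valued N-GM log-counts plus binary LEX indicators."""
    features = {}
    for ngram in context_ngrams:                       # patterns spanning the decision point
        features["NGM=" + ngram] = ngram_logcount(ngram, web_counts)
    for position, word in lexical_items:               # e.g. ("adj1", "big")
        features["LEX_%s=%s" % (position, word)] = 1.0
    return features

# Hypothetical usage for ordering the adjective pair (big, red):
web_counts = {"big red": 120000, "red big": 900}
print(make_features(["big red", "red big"],
                    [("adj1", "big"), ("adj2", "red")],
                    web_counts))
```

The specific patterns and lexical positions that get instantiated are task-dependent, as described in the corresponding sections.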
For the out-of-domain Gutenberg and Medline data used in Sections 3 and 4, we generate examples ourselves.1 We chose Gutenberg and Medline in order to provide challenging, distinct domains from our training corpora. Our Gutenberg corpus consists of out-of-copyright books, automatically downloaded from the Project Gutenberg website.2 The Medline data consists of a large collection of online biomedical abstracts. We describe how labeled adjective and spelling examples are created from these corpora in the corresponding sections. 2.3 Web-Scale Auxiliary Data The most widely-used N-gram corpus is the Google 5-gram Corpus (Brants and Franz, 2006). For our tasks, we also use Google V2: a new N-gram corpus (also with N-grams of length oneto-five) that we created from the same one-trillionword snapshot of the web as the Google 5-gram Corpus, but with several enhancements. These include: 1) Reducing noise by removing duplicate sentences and sentences with a high proportion of non-alphanumeric characters (together filtering about 80% of the source data), 2) pre-converting all digits to the 0 character to reduce sparsity for numeric expressions, and 3) including the part-ofspeech (POS) tag distribution for each N-gram. The source data was automatically tagged with TnT (Brants, 2000), using the Penn Treebank tag set. Lin et al. (2010) provide more details on the 1http://webdocs.cs.ualberta.ca/∼bergsma/Robust/ provides our Gutenberg corpus, a link to Medline, and also the generated examples for both Gutenberg and Medline. 2www.gutenberg.org. All books just released in 2009 and thus unlikely to occur in the source data for our N-gram corpus (from 2006). Of course, with removal of sentence duplicates and also N-gram thresholding, the possible presence of a test sentence in the massive source data is unlikely to affect results. Carlson et al. (2008) reach a similar conclusion. 866 N-gram data and N-gram search tools. The third enhancement is especially relevant here, as we can use the POS distribution to collect counts for N-grams of mixed words and tags. For example, we have developed an N-gram search engine that can count how often the adjective unprecedented precedes another adjective in our web corpus (113K times) and how often it follows one (11K times). Thus, even if we haven’t seen a particular adjective pair directly, we can use the positional preferences of each adjective to order them. Early web-based models used search engines to collect N-gram counts, and thus could not use capitalization, punctuation, and annotations such as part-of-speech (Kilgarriff and Grefenstette, 2003). Using a POS-tagged web corpus goes a long way to addressing earlier criticisms of web-based NLP. 3 Prenominal Adjective Ordering Prenominal adjective ordering strongly affects text readability. For example, while the unprecedented statistical revolution is fluent, the statistical unprecedented revolution is not. Many NLP systems need to handle adjective ordering robustly. In machine translation, if a noun has two adjective modifiers, they must be ordered correctly in the target language. Adjective ordering is also needed in Natural Language Generation systems that produce information from databases; for example, to convey information (in sentences) about medical patients (Shaw and Hatzivassiloglou, 1999). We focus on the task of ordering a pair of adjectives independently of the noun they modify and achieve good performance in this setting. 
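As an illustration of how such mixed word/tag counts can be used for this task, here is a minimal sketch (our own; the count dictionaries are hypothetical stand-ins for the N-gram search engine) of the direct web-evidence ordering heuristic discussed below, with a positional-preference back-off:

```python
def order_adjectives(a1, a2, pair_counts, before_adj_counts, after_adj_counts):
    """Order two adjectives using web N-gram evidence.

    pair_counts:       counts of the adjectives occurring contiguously, e.g. ("big", "red") -> 120000
    before_adj_counts: how often each adjective precedes an adjective-tagged word, i.e. c(a J.*)
    after_adj_counts:  how often each adjective follows an adjective-tagged word, i.e. c(J.* a)
    """
    direct = pair_counts.get((a1, a2), 0)
    reverse = pair_counts.get((a2, a1), 0)
    if direct != reverse:
        # Direct evidence: output whichever order is more frequent on the web.
        return (a1, a2) if direct > reverse else (a2, a1)
    # Back off to positional preferences when the pair itself is unseen.
    pref_a1 = before_adj_counts.get(a1, 0) - after_adj_counts.get(a1, 0)
    pref_a2 = before_adj_counts.get(a2, 0) - after_adj_counts.get(a2, 0)
    return (a1, a2) if pref_a1 >= pref_a2 else (a2, a1)

# Hypothetical counts mirroring the "unprecedented" example above: the pair itself is unseen,
# so the positional preference puts "unprecedented" first.
print(order_adjectives("unprecedented", "statistical", {},
                       {"unprecedented": 113000, "statistical": 40000},
                       {"unprecedented": 11000, "statistical": 90000}))
```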
Following the set-up of Malouf (2000), we experiment on the 263K adjective pairs Malouf extracted from the British National Corpus (BNC). We use 90% of pairs for training, 5% for testing, and 5% for development. This forms our in-domain data.3 We create out-of-domain examples by tokenizing Medline and Gutenberg (Section 2.2), then POS-tagging them with CRFTagger (Phan, 2006). We create examples from all sequences of two adjectives followed by a noun. Like Malouf (2000), we assume that edited text has adjectives ordered fluently. We extract 13K and 9.1K out-of-domain pairs from Gutenberg and Medline, respectively.4 3BNC is not a domain per se (rather a balanced corpus), but has a style and vocabulary distinct from our OOD data. 4Like Malouf (2000), we convert our pairs to lower-case. Since the N-gram data includes case, we merge counts from the upper and lower case combinations. The input to the system is a pair of adjectives, (a1, a2), ordered alphabetically. The task is to classify this order as correct (the positive class) or incorrect (the negative class). Since both classes are equally likely, the majority-class baseline is around 50% on each of the three test sets. 3.1 Supervised Adjective Ordering 3.1.1 LEX features Our adjective ordering model with LEX features is a novel contribution of this paper. We begin with two features for each pair: an indicator feature for a1, which gets a feature value of +1, and an indicator feature for a2, which gets a feature value of −1. The parameters of the model are therefore weights on specific adjectives. The higher the weight on an adjective, the more it is preferred in the first position of a pair. If the alphabetic ordering is correct, the weight on a1 should be higher than the weight on a2, so that the classifier returns a positive score. If the reverse ordering is preferred, a2 should receive a higher weight. Training the model in this setting is a matter of assigning weights to all the observed adjectives such that the training pairs are maximally ordered correctly. The feature weights thus implicitly produce a linear ordering of all observed adjectives. The examples can also be regarded as rank constraints in a discriminative ranker (Joachims, 2002). Transitivity is achieved naturally in that if we correctly order pairs a ≺b and b ≺c in the training set, then a ≺c by virtue of the weights on a and c. While exploiting transitivity has been shown to improve adjective ordering, there are many conflicting pairs that make a strict linear ordering of adjectives impossible (Malouf, 2000). We therefore provide an indicator feature for the pair a1a2, so the classifier can memorize exceptions to the linear ordering, breaking strict order transitivity. Our classifier thus operates along the lines of rankers in the preference-based setting as described in Ailon and Mohri (2008). Finally, we also have features for all suffixes of length 1-to-4 letters, as these encode useful information about adjective class (Malouf, 2000). Like the adjective features, the suffix features receive a value of +1 for adjectives in the first position and −1 for those in the second. 3.1.2 N-GM features Lapata and Keller (2005) propose a web-based approach to adjective ordering: take the most867 System IN O1 O2 Malouf (2000) 91.5 65.6 71.6 web c(a1, a2) vs. c(a2, a1) 87.1 83.7 86.0 SVM with N-GM features 90.0 85.8 88.5 SVM with LEX features 93.0 70.0 73.9 SVM with N-GM + LEX 93.7 83.6 85.4 Table 1: Adjective ordering accuracy (%). 
SVM and Malouf (2000) trained on BNC, tested on BNC (IN), Gutenberg (O1), and Medline (O2). frequent order of the words on the web, c(a1, a2) vs. c(a2, a1). We adopt this as our unsupervised approach. We merge the counts for the adjectives occurring contiguously and separated by a comma. These are indubitably the most important N-GM features; we include them but also other, tag-based counts from Google V2. Raw counts include cases where one of the adjectives is not used as a modifier: “the special present was” vs. “the present special issue.” We include log-counts for the following, more-targeted patterns:5 c(a1 a2 N.*), c(a2 a1 N.*), c(DT a1 a2 N.*), c(DT a2 a1 N.*). We also include features for the log-counts of each adjective preceded or followed by a word matching an adjective-tag: c(a1 J.*), c(J.* a1), c(a2 J.*), c(J.* a2). These assess the positional preferences of each adjective. Finally, we include the log-frequency of each adjective. The more frequent adjective occurs first 57% of the time. As in all tasks, the counts are features in a classifier, so the importance of the different patterns is weighted discriminatively during training. 3.2 Adjective Ordering Results In-domain, with both feature classes, we set a strong new standard on this data: 93.7% accuracy for the N-GM+LEX system (Table 1). We trained and tested Malouf (2000)’s program on our data; our LEX classifier, which also uses no auxiliary corpus, makes 18% fewer errors than Malouf’s system. Our web-based N-GM model is also superior to the direct evidence web-based approach of Lapata and Keller (2005), scoring 90.0% vs. 87.1% accuracy. These results show the benefit of our new lexicalized and web-based features. Figure 1 gives the in-domain learning curve. With fewer training examples, the systems with N-GM features strongly outperform the LEX-only system. Note that with tens of thousands of test 5In this notation, capital letters (and regular expressions) are matched against tags while a1 and a2 match words. 60 65 70 75 80 85 90 95 100 1e5 1e4 1e3 100 Accuracy (%) Number of training examples N-GM+LEX N-GM LEX Figure 1: In-domain learning curve of adjective ordering classifiers on BNC. 60 65 70 75 80 85 90 95 100 1e5 1e4 1e3 100 Accuracy (%) Number of training examples N-GM+LEX N-GM LEX Figure 2: Out-of-domain learning curve of adjective ordering classifiers on Gutenberg. examples, all differences are highly significant. Out-of-domain, LEX’s accuracy drops a shocking 23% on Gutenberg and 19% on Medline (Table 1). Malouf (2000)’s system fares even worse. The overlap between training and test pairs helps explain. While 59% of the BNC test pairs were seen in the training corpus, only 25% of Gutenberg and 18% of Medline pairs were seen in training. While other ordering models have also achieved “very poor results” out-of-domain (Mitchell, 2009), we expected our expanded set of LEX features to provide good generalization on new data. Instead, LEX is very unreliable on new domains. N-GM features do not rely on specific pairs in training data, and thus remain fairly robust crossdomain. Across the three test sets, 84-89% of examples had the correct ordering appear at least once on the web. On new domains, the learned N-GM system maintains an advantage over the unsupervised c(a1, a2) vs. c(a2, a1), but the difference is reduced. Note that training with 10-fold 868 cross validation, the N-GM system can achieve up to 87.5% on Gutenberg (90.0% for N-GM + LEX). 
The learning curve showing performance on Gutenberg (but still training on BNC) is particularly instructive (Figure 2, performance on Medline is very similar). The LEX system performs much worse than the web-based models across all training sizes. For our top in-domain system, N-GM + LEX, as you add more labeled examples, performance begins decreasing out-ofdomain. The system disregards the robust N-gram counts as it is more and more confident in the LEX features, and it suffers the consequences. 4 Context-Sensitive Spelling Correction We now turn to the generation problem of contextsensitive spelling correction. For every occurrence of a word in a pre-defined set of confusable words (like peace and piece), the system must select the most likely word from the set, flagging possible usage errors when the predicted word disagrees with the original. Contextual spell checkers are one of the most widely used NLP technologies, reaching millions of users via compressed N-gram models in Microsoft Office (Church et al., 2007). Our in-domain examples are from the New York Times (NYT) portion of Gigaword, from Bergsma et al. (2009). They include the 5 confusion sets where accuracy was below 90% in Golding and Roth (1999). There are 100K training, 10K development, and 10K test examples for each confusion set. Our results are averages across confusion sets. Out-of-domain examples are again drawn from Gutenberg and Medline. We extract all instances of words that are in one of our confusion sets, along with surrounding context. By assuming the extracted instances represent correct usage, we label 7.8K and 56K out-of-domain test examples for Gutenberg and Medline, respectively. We test three unsupervised systems: 1) Lapata and Keller (2005) use one token of context on the left and one on the right, and output the candidate from the confusion set that occurs most frequently in this pattern. 2) Bergsma et al. (2009) measure the frequency of the candidates in all the 3-to-5gram patterns that span the confusable word. For each candidate, they sum the log-counts of all patterns filled with the candidate, and output the candidate with the highest total. 3) The baseline predicts the most frequent member of each confusion set, based on frequencies in the NYT training data. System IN O1 O2 Baseline 66.9 44.6 60.6 Lapata and Keller (2005) 88.4 78.0 87.4 Bergsma et al. (2009) 94.8 87.7 94.2 SVM with N-GM features 95.7 92.1 93.9 SVM with LEX features 95.2 85.8 91.0 SVM with N-GM + LEX 96.5 91.9 94.8 Table 2: Spelling correction accuracy (%). SVM trained on NYT, tested on NYT (IN) and out-ofdomain Gutenberg (O1) and Medline (O2). 70 75 80 85 90 95 100 1e5 1e4 1e3 100 Accuracy (%) Number of training examples N-GM+LEX N-GM LEX Figure 3: In-domain learning curve of spelling correction classifiers on NYT. 4.1 Supervised Spelling Correction Our LEX features are typical disambiguation features that flag specific aspects of the context. We have features for the words at all positions in a 9-word window (called collocation features by Golding and Roth (1999)), plus indicators for a particular word preceding or following the confusable word. We also include indicators for all N-grams, and their position, in a 9-word window. For N-GM count features, we follow Bergsma et al. (2009). We include the log-counts of all N-grams that span the confusable word, with each word in the confusion set filling the N-gram pattern. These features do not use part-of-speech. Following Bergsma et al. 
(2009), we get N-gram counts using the original Google N-gram Corpus. While neither our LEX nor N-GM features are novel on their own, they have, perhaps surprisingly, not yet been evaluated in a single model. 4.2 Spelling Correction Results The N-GM features outperform the LEX features, 95.7% vs. 95.2% (Table 2). Together, they achieve a very strong 96.5% in-domain accuracy. 869 This is 2% higher than the best unsupervised approach (Bergsma et al., 2009). Web-based models again perform well across a range of training data sizes (Figure 3). The error rate of LEX nearly triples on Gutenberg and almost doubles on Medline (Table 2). Removing N-GM features from the N-GM + LEX system, errors increase around 75% on both Gutenberg and Medline. The LEX features provide no help to the combined system on Gutenberg, while they do help significantly on Medline. Note the learning curves for N-GM+LEX on Gutenberg and Medline (not shown) do not display the decrease that we observed in adjective ordering (Figure 2). Both the baseline and LEX perform poorly on Gutenberg. The baseline predicts the majority class from NYT, but it’s not always the majority class in Gutenberg. For example, while in NYT site occurs 87% of the time for the (cite, sight, site) confusion set, sight occurs 90% of the time in Gutenberg. The LEX classifier exploits this bias as it is regularized toward a more economical model, but the bias does not transfer to the new domain. 5 Noun Compound Bracketing About 70% of web queries are noun phrases (Barr et al., 2008) and methods that can reliably parse these phrases are of great interest in NLP. For example, a web query for zebra hair straightener should be bracketed as (zebra (hair straightener)), a stylish hair straightener with zebra print, rather than ((zebra hair) straightener), a useless product since the fur of zebras is already quite straight. The noun compound (NC) bracketing task is usually cast as a decision whether a 3-word NC has a left or right bracketing. Most approaches are unsupervised, using a large corpus to compare the statistical association between word pairs in the NC. The adjacency model (Marcus, 1980) proposes a left bracketing if the association between words one and two is higher than between two and three. The dependency model (Lauer, 1995a) compares one-two vs. one-three. We include dependency model results using PMI as the association measure; results were lower with the adjacency model. As in-domain data, we use Vadas and Curran (2007a)’s Wall-Street Journal (WSJ) data, an extension of the Treebank (which originally left NPs flat). We extract all sequences of three consecutive common nouns, generating 1983 examples System IN O1 O2 Baseline 70.5 66.8 84.1 Dependency model 74.7 82.8 84.4 SVM with N-GM features 89.5 81.6 86.2 SVM with LEX features 81.1 70.9 79.0 SVM with N-GM + LEX 91.6 81.6 87.4 Table 3: NC-bracketing accuracy (%). SVM trained on WSJ, tested on WSJ (IN) and out-ofdomain Grolier (O1) and Medline (O2). 60 65 70 75 80 85 90 95 100 1e3 100 10 Accuracy (%) Number of labeled examples N-GM+LEX N-GM LEX Figure 4: In-domain NC-bracketer learning curve from sections 0-22 of the Treebank as training, 72 from section 24 for development and 95 from section 23 as a test set. As out-of-domain data, we use 244 NCs from Grolier Encyclopedia (Lauer, 1995a) and 429 NCs from Medline (Nakov, 2007). The majority class baseline is left-bracketing. 
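For reference, a minimal sketch (our own, with a hypothetical count lookup) of the unsupervised dependency-model decision described above, using PMI as the association measure; the adjacency variant would compare the (w1, w2) and (w2, w3) associations instead:

```python
import math

def pmi(w1, w2, pair_counts, unigram_counts, total):
    """Pointwise mutual information between two nouns, estimated from corpus counts."""
    joint = pair_counts.get((w1, w2), 0)
    if joint == 0:
        return float("-inf")
    p_joint = joint / total
    p1 = unigram_counts[w1] / total
    p2 = unigram_counts[w2] / total
    return math.log(p_joint / (p1 * p2))

def bracket_dependency(w1, w2, w3, pair_counts, unigram_counts, total):
    """Dependency model: left bracketing ((w1 w2) w3) if w1 is more associated with w2 than with w3."""
    left = pmi(w1, w2, pair_counts, unigram_counts, total)
    right = pmi(w1, w3, pair_counts, unigram_counts, total)
    return "left" if left > right else "right"

# Example call: bracket_dependency("zebra", "hair", "straightener", pair_counts, unigram_counts, total)
# compares PMI(zebra, hair) against PMI(zebra, straightener).
```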
5.1 Supervised Noun Bracketing Our LEX features indicate the specific noun at each position in the compound, plus the three pairs of nouns and the full noun triple. We also add features for the capitalization pattern of the sequence. N-GM features give the log-count of all subsets of the compound. Counts are from Google V2. Following Nakov and Hearst (2005), we also include counts of noun pairs collapsed into a single token; if a pair occurs often on the web as a single unit, it strongly indicates the pair is a constituent. Vadas and Curran (2007a) use simpler features, e.g. they do not use collapsed pair counts. They achieve 89.9% in-domain on WSJ and 80.7% on Grolier. Vadas and Curran (2007b) use comparable features to ours, but do not test out-of-domain. 5.2 Noun Compound Bracketing Results N-GM systems perform much better on this task (Table 3). N-GM+LEX is statistically significantly 870 better than LEX on all sets. In-domain, errors more than double without N-GM features. LEX performs poorly here because there are far fewer training examples. The learning curve (Figure 4) looks much like earlier in-domain curves (Figures 1 and 3), but truncated before LEX becomes competitive. The absence of a sufficient amount of labeled data explains why NC-bracketing is generally regarded as a task where corpus counts are crucial. All web-based models (including the dependency model) exceed 81.5% on Grolier, which is the level of human agreement (Lauer, 1995b). N-GM + LEX is highest on Medline, and close to the 88% human agreement (Nakov and Hearst, 2005). Out-of-domain, the LEX approach performs very poorly, close to or below the baseline accuracy. With little training data and crossdomain usage, N-gram features are essential. 6 Verb Part-of-Speech Disambiguation Our final task is POS-tagging. We focus on one frequent and difficult tagging decision: the distinction between a past-tense verb (VBD) and a past participle (VBN). For example, in the troops stationed in Iraq, the verb stationed is a VBN; troops is the head of the phrase. On the other hand, for the troops vacationed in Iraq, the verb vacationed is a VBD and also the head. Some verbs make the distinction explicit (eat has VBD ate, VBN eaten), but most require context for resolution. Conflating VBN/VBD is damaging because it affects downstream parsers and semantic role labelers. The task is difficult because nearby POS tags can be identical in both cases. When the verb follows a noun, tag assignment can hinge on world-knowledge, i.e., the global lexical relation between the noun and verb (E.g., troops tends to be the object of stationed but the subject of vacationed).6 Web-scale N-gram data might help improve the VBN/VBD distinction by providing relational evidence, even if the verb, noun, or verbnoun pair were not observed in training data. We extract nouns followed by a VBN/VBD in the WSJ portion of the Treebank (Marcus et al., 1993), getting 23K training, 1091 development and 1130 test examples from sections 2-22, 24, and 23, respectively. For out-of-domain data, we get 21K 6HMM-style taggers, like the fast TnT tagger used on our web corpus, do not use bilexical features, and so perform especially poorly on these cases. One motivation for our work was to develop a fast post-processor to fix VBN/VBD errors. examples from the Brown portion of the Treebank and 6296 examples from tagged Medline abstracts in the PennBioIE corpus (Kulick et al., 2004). The majority class baseline is to choose VBD. 
6.1 Supervised Verb Disambiguation There are two orthogonal sources of information for predicting VBN/VBD: 1) the noun-verb pair, and 2) the context around the pair. Both N-GM and LEX features encode both these sources. 6.1.1 LEX features For 1), we use indicators for the noun and verb, the noun-verb pair, whether the verb is on an inhouse list of said-verb (like warned, announced, etc.), whether the noun is capitalized and whether it’s upper-case. Note that in training data, 97.3% of capitalized nouns are followed by a VBD and 98.5% of said-verbs are VBDs. For 2), we provide indicator features for the words before the noun and after the verb. 6.1.2 N-GM features For 1), we characterize a noun-verb relation via features for the pair’s distribution in Google V2. Characterizing a word by its distribution has a long history in NLP; we apply similar techniques to relations, like Turney (2006), but with a larger corpus and richer annotations. We extract the 20 most-frequent N-grams that contain both the noun and the verb in the pair. For each of these, we convert the tokens to POS-tags, except for tokens that are among the most frequent 100 unigrams in our corpus, which we include in word form. We mask the noun of interest as N and the verb of interest as V. This converted N-gram is the feature label. The value is the pattern’s log-count. A high count for patterns like (N that V), (N have V) suggests the relation is a VBD, while patterns (N that were V), (N V by), (V some N) indicate a VBN. As always, the classifier learns the association between patterns and classes. For 2), we use counts for the verb’s context cooccurring with a VBD or VBN tag. E.g., we see whether VBD cases like troops ate or VBN cases like troops eaten are more frequent. Although our corpus contains many VBN/VBD errors, we hope the errors are random enough for aggregate counts to be useful. The context is an N-gram spanning the VBN/VBD. We have log-count features for all five such N-grams in the (previous-word, noun, verb, next-word) quadruple. The log-count is in871 System IN O1 O2 Baseline 89.2 85.2 79.6 ContextSum 92.5 91.1 90.4 SVM with N-GM features 96.1 93.4 93.8 SVM with LEX features 95.8 93.4 93.0 SVM with N-GM + LEX 96.4 93.5 94.0 Table 4: Verb-POS-disambiguation accuracy (%) trained on WSJ, tested on WSJ (IN) and out-ofdomain Brown (O1) and Medline (O2). 80 85 90 95 100 1e4 1e3 100 Accuracy (%) Number of training examples N-GM (N,V+context) LEX (N,V+context) N-GM (N,V) LEX (N,V) Figure 5: Out-of-domain learning curve of verb disambiguation classifiers on Medline. dexed by the position and length of the N-gram. We include separate count features for contexts matching the specific noun and for when the noun token can match any word tagged as a noun. ContextSum: We use these context counts in an unsupervised system, ContextSum. Analogously to Bergsma et al. (2009), we separately sum the log-counts for all contexts filled with VBD and then VBN, outputting the tag with the higher total. 6.2 Verb POS Disambiguation Results As in all tasks, N-GM+LEX has the best in-domain accuracy (96.4%, Table 4). Out-of-domain, when N-grams are excluded, errors only increase around 14% on Medline and 2% on Brown (the differences are not statistically significant). Why? Figure 5, the learning curve for performance on Medline, suggests some reasons. We omit N-GM+LEX from Figure 5 as it closely follows N-GM. Recall that we grouped the features into two views: 1) noun-verb (N,V) and 2) context. 
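A minimal sketch of the ContextSum baseline from Section 6.1.2 above: for each candidate tag we sum the log-counts of the five N-grams of the (previous-word, noun, verb, next-word) window that span the verb, and output the tag with the larger total. The `tagged_count` lookup is an assumed interface to a POS-tagged N-gram table, not an API from the paper.

```python
import math


def context_sum(prev_word, noun, verb, next_word, tagged_count):
    """Unsupervised VBN/VBD decision in the spirit of ContextSum.

    `tagged_count(span, tag)` is an assumed interface: it should return how
    often the token span occurs in a POS-tagged N-gram corpus with the verb
    position tagged `tag`.  We sum log-counts over the five N-grams of the
    (previous-word, noun, verb, next-word) window that span the verb, once
    with VBD and once with VBN, and output the tag with the larger total.
    """
    window = [prev_word, noun, verb, next_word]
    verb_pos = 2
    spans = [tuple(window[i:j])
             for i in range(len(window))
             for j in range(i + 1, len(window) + 1)
             if i <= verb_pos < j and j - i >= 2]   # the 5 spans covering the verb
    totals = {}
    for tag in ("VBD", "VBN"):
        totals[tag] = sum(math.log(tagged_count(span, tag) + 1) for span in spans)
    return max(totals, key=totals.get)


# Toy usage: a dummy count table that prefers VBN readings of "stationed".
toy_count = lambda span, tag: 50 if tag == "VBN" and "stationed" in span else 5
print(context_sum("the", "troops", "stationed", "in", toy_count))  # -> 'VBN'
```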
If we use just (N,V) features, we do see a large drop outof-domain: LEX (N,V) lags N-GM (N,V) even using all the training examples. The same is true using only context features (not shown). Using both views, the results are closer: 93.8% for N-GM and 93.0% for LEX. With two views of an example, LEX is more likely to have domain-neutral features to draw on. Data sparsity is reduced. Also, the Treebank provides an atypical number of labeled examples for analysis tasks. In a more typical situation with less labeled examples, N-GM strongly dominates LEX, even when two views are used. E.g., with 2285 training examples, N-GM+LEX is statistically significantly better than LEX on both out-of-domain sets. All systems, however, perform log-linearly with training size. In other tasks we only had a handful of N-GM features; here there are 21K features for the distributional patterns of N,V pairs. Reducing this feature space by pruning or performing transformations may improve accuracy in and out-ofdomain. 7 Discussion and Future Work Of all classifiers, LEX performs worst on all crossdomain tasks. Clearly, many of the regularities that a typical classifier exploits in one domain do not transfer to new genres. N-GM features, however, do not depend directly on training examples, and thus work better cross-domain. Of course, using web-scale N-grams is not the only way to create robust classifiers. Counts from any large auxiliary corpus may also help, but web counts should help more (Lapata and Keller, 2005). Section 6.2 suggests that another way to mitigate domaindependence is having multiple feature views. Banko and Brill (2001) argue “a logical next step for the research community would be to direct efforts towards increasing the size of annotated training collections.” Assuming we really do want systems that operate beyond the specific domains on which they are trained, the community also needs to identify which systems behave as in Figure 2, where the accuracy of the best in-domain system actually decreases with more training examples. Our results suggest better features, such as web pattern counts, may help more than expanding training data. Also, systems using webscale unlabeled data will improve automatically as the web expands, without annotation effort. In some sense, using web counts as features is a form of domain adaptation: adapting a web model to the training domain. How do we ensure these features are adapted well and not used in domain-specific ways (especially with many features to adapt, as in Section 6)? One option may 872 be to regularize the classifier specifically for outof-domain accuracy. We found that adjusting the SVM misclassification penalty (for more regularization) can help or hurt out-of-domain. Other regularizations are possible. In each task, there are domain-neutral unsupervised approaches. We could encode these systems as linear classifiers with corresponding weights. Rather than a typical SVM that minimizes the weight-norm ||w|| (plus the slacks), we could regularize toward domainneutral weights. This regularization could be optimized on creative splits of the training data. 8 Conclusion We presented results on tasks spanning a range of NLP research: generation, disambiguation, parsing and tagging. Using web-scale N-gram data improves accuracy on each task. When less training data is used, or when the system is used on a different domain, N-gram features greatly improve performance. 
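One way the domain-neutral regularization idea from Section 7 might be prototyped is to penalize distance from a fixed weight vector w0 rather than from the origin. The sub-gradient trainer below is only a sketch of that objective under our own simplifying assumptions, not the system evaluated in this paper.

```python
import numpy as np


def train_biased_svm(X, y, w0, C=1.0, lr=0.01, epochs=200):
    """Linear SVM-style trainer regularized toward domain-neutral weights w0.

    Minimizes 0.5 * ||w - w0||^2 + C * sum_i hinge(y_i * w.x_i) by
    sub-gradient descent.  With w0 = 0 this is the usual SVM objective; a
    non-zero w0 (e.g. weights that mimic an unsupervised, domain-neutral
    scorer) pulls the learned model toward that solution.
    """
    w = w0.astype(float).copy()
    n = len(y)
    for _ in range(epochs):
        grad = w - w0                          # gradient of the proximity term
        margins = y * (X @ w)
        viol = margins < 1                     # examples violating the margin
        grad -= C * (y[viol, None] * X[viol]).sum(axis=0)
        w -= lr * grad / n
    return w


# Toy usage: 2-feature data; w0 encodes a prior preference for feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + 0.1 * rng.normal(size=100) > 0, 1.0, -1.0)
print(train_biased_svm(X, y, w0=np.array([1.0, 0.0])))
```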
Since most supervised NLP systems do not use web-scale counts, further cross-domain evaluation may reveal some very brittle systems. Continued effort in new domains should be a priority for the community going forward. Acknowledgments We gratefully acknowledge the Center for Language and Speech Processing at Johns Hopkins University for hosting the workshop at which part of this research was conducted. References Nir Ailon and Mehryar Mohri. 2008. An efficient reduction of ranking to classification. In COLT. Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In ACL. Cory Barr, Rosie Jones, and Moira Regelson. 2008. The linguistic structure of English web-search queries. In EMNLP. Shane Bergsma, Dekang Lin, and Randy Goebel. 2009. Web-scale N-gram models for lexical disambiguation. In IJCAI. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL. Thorsten Brants and Alex Franz. 2006. The Google Web 1T 5-gram Corpus Version 1.1. LDC2006T13. Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In EMNLP. Thorsten Brants. 2000. TnT – a statistical part-ofspeech tagger. In ANLP. Andrew Carlson, Tom M. Mitchell, and Ian Fette. 2008. Data analysis project: Leveraging massive textual corpora using n-gram statistics. Technial Report CMU-ML-08-107. Kenneth Church, Ted Hart, and Jianfeng Gao. 2007. Compressing trigram language models with Golomb coding. In EMNLP-CoNLL. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. In ACL. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9. Dan Gildea. 2001. Corpus variation and parser performance. In EMNLP. Andrew R. Golding and Dan Roth. 1999. A Winnowbased approach to context-sensitive spelling correction. Machine Learning, 34(1-3):107–130. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In KDD. Frank Keller and Mirella Lapata. 2003. Using the web to obtain frequencies for unseen bigrams. Computational Linguistics, 29(3):459–484. Adam Kilgarriff and Gregory Grefenstette. 2003. Introduction to the special issue on the Web as corpus. Computational Linguistics, 29(3):333–347. Seth Kulick, Ann Bies, Mark Liberman, Mark Mandel, Ryan McDonald, Martha Palmer, Andrew Schein, Lyle Ungar, Scott Winters, and Pete White. 2004. Integrated annotation for biomedical information extraction. In BioLINK 2004: Linking Biological Literature, Ontologies and Databases. Mirella Lapata and Frank Keller. 2005. Web-based models for natural language processing. ACM Transactions on Speech and Language Processing, 2(1):1–31. Mark Lauer. 1995a. Corpus statistics meet the noun compound: Some empirical results. In ACL. Mark Lauer. 1995b. Designing Statistical Language Learners: Experiments on Compound Nouns. Ph.D. thesis, Macquarie University. Dekang Lin, Kenneth Church, Heng Ji, Satoshi Sekine, David Yarowsky, Shane Bergsma, Kailash Patil, Emily Pitler, Rachel Lathbury, Vikram Rao, Kapil Dalwani, and Sushant Narsale. 2010. New tools for web-scale N-grams. In LREC. 873 Robert Malouf. 2000. The order of prenominal adjectives in natural language generation. In ACL. Mitchell P. Marcus, Beatrice Santorini, and Mary Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. 
Computational Linguistics, 19(2):313–330. Mitchell P. Marcus. 1980. Theory of Syntactic Recognition for Natural Languages. MIT Press, Cambridge, MA, USA. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In COLING-ACL. Margaret Mitchell. 2009. Class-based ordering of prenominal modifiers. In 12th European Workshop on Natural Language Generation. Natalia N. Modjeska, Katja Markert, and Malvina Nissim. 2003. Using the Web in machine learning for other-anaphora resolution. In EMNLP. Preslav Nakov and Marti Hearst. 2005. Search engine statistics beyond the n-gram: Application to noun compound bracketing. In CoNLL. Preslav Ivanov Nakov. 2007. Using the Web as an Implicit Training Set: Application to Noun Compound Syntax and Semantics. Ph.D. thesis, University of California, Berkeley. Xuan-Hieu Phan. 2006. CRFTagger: CRF English POS Tagger. crftagger.sourceforge.net. Laura Rimell and Stephen Clark. 2008. Adapting a lexicalized-grammar parser to contrasting domains. In EMNLP. James Shaw and Vasileios Hatzivassiloglou. 1999. Ordering among premodifiers. In ACL. Yoshimasa Tsuruoka, Yuka Tateishi, Jin-Dong Kim, Tomoko Ohta, John McNaught, Sophia Ananiadou, and Jun’ichi Tsujii. 2005. Developing a robust partof-speech tagger for biomedical text. In Advances in Informatics. Peter D. Turney. 2006. Similarity of semantic relations. Computational Linguistics, 32(3):379–416. David Vadas and James R. Curran. 2007a. Adding noun phrase structure to the Penn Treebank. In ACL. David Vadas and James R. Curran. 2007b. Large-scale supervised models for noun phrase bracketing. In PACLING. Xiaofeng Yang, Jian Su, and Chew Lim Tan. 2005. Improving pronoun resolution using statistics-based semantic compatibility information. In ACL. 874
2010
89
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 79–87, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A Risk Minimization Framework for Extractive Speech Summarization Shih-Hsiang Lin and Berlin Chen National Taiwan Normal University Taipei, Taiwan {shlin, berlin}@csie.ntnu.edu.tw Abstract In this paper, we formulate extractive summarization as a risk minimization problem and propose a unified probabilistic framework that naturally combines supervised and unsupervised summarization models to inherit their individual merits as well as to overcome their inherent limitations. In addition, the introduction of various loss functions also provides the summarization framework with a flexible but systematic way to render the redundancy and coherence relationships among sentences and between sentences and the whole document, respectively. Experiments on speech summarization show that the methods deduced from our framework are very competitive with existing summarization approaches. 1 Introduction Automated summarization systems which enable user to quickly digest the important information conveyed by either a single or a cluster of documents are indispensible for managing the rapidly growing amount of textual information and multimedia content (Mani and Maybury, 1999). On the other hand, due to the maturity of text summarization, the research paradigm has been extended to speech summarization over the years (Furui et al., 2004; McKeown et al., 2005). Speech summarization is expected to distill important information and remove redundant and incorrect information caused by recognition errors from spoken documents, enabling user to efficiently review spoken documents and understand the associated topics quickly. It would also be useful for improving the efficiency of a number of potential applications like retrieval and mining of large volumes of spoken documents. A summary can be either abstractive or extractive. In abstractive summarization, a fluent and concise abstract that reflects the key concepts of a document is generated, whereas in extractive summarization, the summary is usually formed by selecting salient sentences from the original document (Mani and Maybury, 1999). The former requires highly sophisticated natural language processing techniques, including semantic representation and inference, as well as natural language generation, while this would make abstractive approaches difficult to replicate or extend from constrained domains to more general domains. In addition to being extractive or abstractive, a summary may also be generated by considering several other aspects like being generic or query-oriented summarization, singledocument or multi-document summarization, and so forth. The readers may refer to (Mani and Maybury, 1999) for a comprehensive overview of automatic text summarization. In this paper, we focus exclusively on generic, singledocument extractive summarization which forms the building block for many other summarization tasks. Aside from traditional ad-hoc extractive summarization methods (Mani and Maybury, 1999), machine-learning approaches with either supervised or unsupervised learning strategies have gained much attention and been applied with empirical success to many summarization tasks (Kupiec et al., 1999; Lin et al., 2009). 
For supervised learning strategies, the summarization task is usually cast as a two-class (summary and nonsummary) sentence-classification problem: A sentence with a set of indicative features is input to the classifier (or summarizer) and a decision is then returned from it on the basis of these features. In general, they usually require a training set, comprised of several documents and their corresponding handcrafted summaries (or labeled data), to train the classifiers. However, manual labeling is expensive in terms of time and personnel. The other potential problem is the socalled “bag-of-sentences” assumption implicitly made by most of these summarizers. That is, sentences are classified independently of each other, 79 without leveraging the dependence relationships among the sentences or the global structure of the document (Shen et al., 2007). Another line of thought attempts to conduct document summarization using unsupervised machine-learning approaches, getting around the need for manually labeled training data. Most previous studies conducted along this line have their roots in the concept of sentence centrality (Gong and Liu, 2001; Erkan and Radev, 2004; Radev et al., 2004; Mihalcea and Tarau, 2005). Put simply, sentences more similar to others are deemed more salient to the main theme of the document; such sentences thus will be selected as part of the summary. Even though the performance of unsupervised summarizers is usually worse than that of supervised summarizers, their domain-independent and easy-to-implement properties still make them attractive. Building on these observations, we expect that researches conducted along the above-mentioned two directions could complement each other, and it might be possible to inherit their individual merits to overcome their inherent limitations. In this paper, we present a probabilistic summarization framework stemming from Bayes decision theory (Berger, 1985) for speech summarization. This framework can not only naturally integrate the above-mentioned two modeling paradigms but also provide a flexible yet systematic way to render the redundancy and coherence relationships among sentences and between sentences and the whole document, respectively. Moreover, we also illustrate how the proposed framework can unify several existing summarization models. The remainder of this paper is structured as follows. We start by reviewing related work on extractive summarization. In Section 3 we formulate the extractive summarization task as a risk minimization problem, followed by a detailed elucidation of the proposed methods in Section 4. Then, the experimental setup and a series of experiments and associated discussions are presented in Sections 5 and 6, respectively. Finally, Section 7 concludes our presentation and discusses avenues for future work. 2 Background Speech summarization can be conducted using either supervised or unsupervised methods (Furui et al., 2004, McKeown et al., 2005, Lin et al., 2008). In the following, we briefly review a few celebrated methods that have been applied to extractive speech summarization tasks with good success. 2.1 Supervised summarizers Extractive speech summarization can be treated as a two-class (positive/negative) classification problem. 
A spoken sentence iS is characterized by set of T indicative features   iT i i x x X , , 1   , and they may include lexical features (Koumpis and Renals, 2000), structural features (Maskey and Hirschberg, 2003), acoustic features (Inoue et al., 2004), discourse features (Zhang et al., 2007) and relevance features (Lin et al., 2009). Then, the corresponding feature vector i X of iS is taken as the input to the classifier. If the output (classification) score belongs to the positive class, iS will be selected as part of the summary; otherwise, it will be excluded (Kupiec et al., 1999). Specifically, the problem can be formulated as follows: Construct a sentence ranking model that assigns a classification score (or a posterior probability) of being in the summary class to each sentence of a spoken document to be summarized; important sentences are subsequently ranked and selected according to these scores. To this end, several popular machine-learning methods could be utilized, like Bayesian classifier (BC) (Kupiec et al., 1999), Gaussian mixture model (GMM) (Fattah and Ren, 2009) , hidden Markov model (HMM) (Conroy and O'leary, 2001), support vector machine (SVM) (Kolcz et al., 2001), maximum entropy (ME) (Ferrier, 2001), conditional random field (CRF) (Galley, 2006; Shen et al., 2007), to name a few. Although such supervised summarizers are effective, most of them (except CRF) usually implicitly assume that sentences are independent of each other (the so-called “bag-of-sentences” assumption) and classify each sentence individually without leveraging the relationship among the sentences (Shen et al., 2007). Another major shortcoming of these summarizers is that a set of handcrafted document-reference summary exemplars are required for training the summarizers; however, such summarizers tend to limit their generalization capability and might not be readily applicable for new tasks or domains. 2.2 Unsupervised summarizers The related work conducted along this direction usually relies on some heuristic rules or statistical evidences between each sentence and the document, avoiding the need of manually labeled training data. For example, the vector space model (VSM) approach represents each sentence of a document and the document itself in vector space (Gong and Liu, 2001), and computes the relevance score between each sentence and the document (e.g., the cosine measure of the simi80 larity between two vectors). Then, the sentences with the highest relevance scores are included in the summary. A natural extension is to represent each document or each sentence vector in a latent semantic space (Gong and Liu, 2001), instead of simply using the literal term information as that done by VSM. On the other hand, the graph-based methods, such as TextRank (Mihalcea and Tarau, 2005) and LexRank (Erkan and Radev, 2004), conceptualize the document to be summarized as a network of sentences, where each node represents a sentence and the associated weight of each link represents the lexical or topical similarity relationship between a pair of nodes. Document summarization thus relies on the global structural information conveyed by such conceptualized network, rather than merely considering the local features of each node (sentence). However, due to the lack of documentsummary reference pairs, the performance of the unsupervised summarizers is usually worse than that of the supervised summarizers. 
Moreover, most of the unsupervised summarizers are constructed solely on the basis of the lexical information without considering other sources of information cues like discourse features, acoustic features, and so forth. 3 A risk minimization framework for extractive summarization Extractive summarization can be viewed as a decision making process in which the summarizer attempts to select a representative subset of sentences or paragraphs from the original documents. Among the several analytical methods that can be employed for the decision process, the Bayes decision theory, which quantifies the tradeoff between various decisions and the potential cost that accompanies each decision, is perhaps the most suited one that can be used to guide the summarizer in choosing a course of action in the face of some uncertainties underlying the decision process (Berger, 1985). Stated formally, a decision problem may consist of four basic elements: 1) an observation O from a random variable O , 2) a set of possible decisions (or actions) Α  a , 3) the state of nature Θ   , and 4) a loss function   , ia L which specifies the cost associated with a chosen decision ia given that  is the true state of nature. The expected risk (or conditional risk) associated with taking decision ia is given by      , | θ d θ|O p ,θ a L O a R θ i i   (1) where   θ|O p is the posterior probability of the state of nature being  given the observation O . Bayes decision theory states that the optimum decision can be made by contemplating each action ia , and then choosing the action for which the expected risk is minimum:  . | min arg * O a R a i ai  (2) The notion of minimizing the Bayes risk has gained much attention and been applied with success to many natural language processing (NLP) tasks, such as automatic speech recognition (Goel and Byrne, 2000), statistical machine translation (Kumar and Byrne, 2004) and statistical information retrieval (Zhai and Lafferty, 2006). Following the same spirit, we formulate the extractive summarization task as a Bayes risk minimization problem. Without loss of generality, let us denote Π   as one of possible selection strategies (or state of nature) which comprises a set of indicators used to address the importance of each sentence iS in a document D to be summarized. A feasible selection strategy can be fairly arbitrary according to the underlying principle. For example, it could be a set of binary indicators denoting whether a sentence should be selected as part of summary or not. On the contrary, it may also be a ranked list used to address the significance of each individual sentence. Moreover, we refer to the k -th action k a as choosing the k -th selection strategy k , and the observation O as the document D to be summarized. As a result, the expected risk of a certain selection strategy k  is given by      . | , |       d D p L D R k k   (3) Consequently, the ultimate goal of extractive summarization could be stated as the search of the best selection strategy from the space of all possible selection strategies that minimizes the expected risk defined as follows:      . 
$$\pi^{*}=\operatorname*{arg\,min}_{\pi_{k}\in\Pi} R(\pi_{k}\mid D)=\operatorname*{arg\,min}_{\pi_{k}\in\Pi}\int_{\Pi} L(\pi_{k},\pi)\,P(\pi\mid D)\,d\pi \quad (4)$$
Although we have described a general formulation for the extractive summarization problem on the grounds of the Bayes decision theory, we consider hereafter a special case of it where the selection strategy is represented by a binary decision vector, of which each element corresponds to a specific sentence $S_i$ in the document $D$ and designates whether it should be selected as part of the summary or not, as the first such attempt. More concretely, we assume that the summary sentences of a given document can be iteratively chosen (i.e., one at each iteration) from the document until the aggregated summary reaches a predefined target summarization ratio. It turns out that the binary vector for each possible action will have just one element equal to 1 and all others equal to zero (or the so-called "one-of-n" coding). For ease of notation, we denote the binary vector by $S_i$ when the $i$-th element has a value of 1. Therefore, the risk minimization framework can be reduced to
$$S^{*}=\operatorname*{arg\,min}_{S_{i}\in\tilde{D}} R(S_{i}\mid\tilde{D})=\operatorname*{arg\,min}_{S_{i}\in\tilde{D}}\sum_{S_{j}\in\tilde{D}} L(S_{i},S_{j})\,P(S_{j}\mid\tilde{D}) \quad (5)$$
where $\tilde{D}$ denotes the remaining sentences that have not been selected into the summary yet (i.e., the "residual" document), and $P(S_{j}\mid\tilde{D})$ is the posterior probability of a sentence $S_j$ given $\tilde{D}$. According to Bayes' rule, we can further express $P(S_{j}\mid\tilde{D})$ as (Chen et al., 2009)
$$P(S_{j}\mid\tilde{D})=\frac{P(\tilde{D}\mid S_{j})\,P(S_{j})}{P(\tilde{D})} \quad (6)$$
where $P(\tilde{D}\mid S_{j})$ is the sentence generative probability, i.e., the likelihood of $\tilde{D}$ being generated by $S_j$; $P(S_j)$ is the prior probability of $S_j$ being important; and the evidence $P(\tilde{D})$ is the marginal probability of $\tilde{D}$, which can be approximated by
$$P(\tilde{D})\approx\sum_{S_{m}\in\tilde{D}} P(\tilde{D}\mid S_{m})\,P(S_{m}) \quad (7)$$
By substituting (6) and (7) into (5), we obtain the following final selection strategy for extractive summarization:
$$S^{*}=\operatorname*{arg\,min}_{S_{i}\in\tilde{D}}\sum_{S_{j}\in\tilde{D}} L(S_{i},S_{j})\,\frac{P(\tilde{D}\mid S_{j})\,P(S_{j})}{\sum_{S_{m}\in\tilde{D}} P(\tilde{D}\mid S_{m})\,P(S_{m})} \quad (8)$$
A remarkable feature of this framework lies in that a sentence to be considered as part of the summary is actually evaluated by three fundamental factors: (1) $P(S_j)$ is the sentence prior probability that addresses the importance of sentence $S_j$ itself; (2) $P(\tilde{D}\mid S_{j})$ is the sentence generative probability that captures the degree of relevance of $S_j$ to the residual document $\tilde{D}$; and (3) $L(S_{i},S_{j})$ is the loss function that characterizes the relationship between sentence $S_i$ and any other sentence $S_j$. As we will soon see, such a framework can be regarded as a generalization of several existing summarization methods. A detailed account of the construction of these three component models in the framework will be given in the following section.
4 Proposed Methods
There are many ways to construct the above-mentioned three component models, i.e., the sentence generative model $P(\tilde{D}\mid S_{j})$, the sentence prior model $P(S_j)$, and the loss function $L(S_{i},S_{j})$. In what follows, we will shed light on one possible attempt that can accomplish this goal elegantly.
4.1 Sentence generative model
To estimate the sentence generative probability, we explore the language modeling (LM) approach, which has been introduced to a wide spectrum of IR tasks and demonstrated with good empirical success.
In the LM approach, each sentence in a document can be simply regarded as a probabilistic generative model consisting of a unigram distribution (the so-called “bag-ofwords” assumption) for generating the document (Chen et al., 2009):     , ~ ~ , ~ D w c D w j j S w P S D P    (9) where   D w c ~ , is the number of times that index term (or word) w occurs in D~ , reflecting that w will contribute more in the calculation of   ~ j S D P if it occurs more frequently in D~ . Note that the sentence model   j S w P is simply estimated on the basis of the frequency of index term w occurring in the sentence j S with the maximum likelihood (ML) criterion. In a sense, (9) belongs to a kind of literal term matching strategy (Chen, 2009) and may suffer the problem of unreliable model estimation owing particularly to only a few sampled index terms present in the sentence (Zhai, 2008). To mitigate this potential defect, a unigram probability estimated from a general collection, which models the general distribution of words in the target language, is often used to smooth the sentence model. Interested readers may refer to (Zhai, 2008; Chen et al., 2009) for a thorough discussion on various ways to construct the sentence generative model. 4.2 Sentence prior model The sentence prior probability  j S P can be regarded as the likelihood of a sentence being important without seeing the whole document. It could be assumed uniformly distributed over sentences or estimated from a wide variety of factors, such as the lexical information, the structural information or the inherent prosodic properties of a spoken sentence. A straightforward way is to assume that the sentence prior probability  j S P is in proportion to the posterior probability of a sentence j S be82 ing included in the summary class when observing a set of indicative features j X of j S derived from such factors or other sentence importance measures (Kupiec et al., 1999). These features can be integrated in a systematic way into the proposed framework by taking the advantage of the learning capability of the supervised machine-learning methods. Specifically, the prior probability  j S P can be approximated by:         , | | | S S S S S S P X P P X P P X p S P j j j j   (10) where   S | j X P and   S | j X P are the likelihoods that a sentence j S with features j X are generated by the summary class S and the nonsummary class S , respectively; the prior probability  S P and  S P are set to be equal in this research. To estimate   S | j X P and   S | j X P , several popular supervised classifiers (or summarizers), like BC or SVM, can be leveraged for this purpose. 4.3 Loss function The loss function introduced in the proposed summarization framework is to measure the relationship between any pair of sentences. Intuitively, when a given sentence is more dissimilar from most of the other sentences, it may incur higher loss as it is taken as the representative sentence (or summary sentence) to represent the main theme embedded in the other ones. Consequently, the loss function can be built on the notion of the similarity measure. In this research, we adopt the cosine measure (Gong and Liu, 2001) to fulfill this goal. We first represent each sentence iS in vector form where each dimension specifies the weighted statistic i tz , , e.g., the product of the term frequency (TF) and inverse document frequency (IDF) scores, associated with an index term t w in sentence iS . 
Then, the cosine similarity between any given two sentences   j i S S , is   . , 1 2 , 1 2 , 1 , ,          T t j t T t i t T t j t i t j i z z z z S S Sim (10) The loss function is thus defined by    . , 1 , j i j i S S Sim S S L   (11) Once the sentence generative model   j S D P | ~ , the sentence prior model  j S P and the loss function   j i S S L , have been properly estimated, the summary sentences can be selected iteratively by (8) according to a predefined target summarization ratio. However, as can be seen from (8), a new summary sentence is selected without considering the redundant information that is also contained in the already selected summary sentences. To alleviate this problem, the concept of maximum marginal relevance (MMR) (Carbonell and Goldstein, 1998), which performs sentence selection iteratively by striking the balance between topic relevance and coverage, can be incorporated into the loss function:        , ' , max 1 , 1 , '                S S Sim S S Sim S S L i S j i j i Summ   (12) where Summ represents the set of sentences that have already been included into the summary and the novelty factor  is used to trade off between relevance and redundancy. 4.4 Relation to other summarization models In this subsection, we briefly illustrate the relationship between our proposed summarization framework and a few existing summarization approaches. We start by considering a special case where a 0-1 loss function is used in (8), namely, the loss function will take value 0 if the two sentences are identical, and 1 otherwise. Then, (8) can be alternatively represented by             , | ~ | ~ max arg | ~ | ~ min arg ~ ~ , ~ ~ ~ *            D S m m i i D S S S D S D S m m j j D S m i i j j m i S P S D P S P S D P S P S D P S P S D P S (13) which actually provides a natural integration of the supervised and unsupervised summarizers (Lin et al., 2009), as mentioned previously. If we further assume the prior probability  j S P is uniformly distributed, the important (or summary) sentence selection problem has now been reduced to the problem of measuring the document-likelihood   j S D P | ~ , or the relevance between the document and the sentence. Alone a similar vein, the important sentences of a document can be selected (or ranked) solely based on the prior probability  j S P with the assumption of an equal document-likelihood   j S D P | ~ . 5 Experimental setup 5.1 Data The summarization dataset used in this research is a widely used broadcast news corpus collected by the Academia Sinica and the Public Television Service Foundation of Taiwan between November 2001 and April 2003 (Wang et al., 2005). Each story contains the speech of one studio anchor, as well as several field reporters and interviewees. A subset of 205 broadcast news doc83 uments compiled between November 2001 and August 2002 was reserved for the summarization experiments. Three subjects were asked to create summaries of the 205 spoken documents for the summarization experiments as references (the gold standard) for evaluation. The summaries were generated by ranking the sentences in the reference transcript of a spoken document by importance without assigning a score to each sentence. The average Chinese character error rate (CER) obtained for the 205 spoken documents was about 35%. 
Since broadcast news stories often follow a relatively regular structure as compared to other speech materials like conversations, the positional information would play an important (dominant) role in extractive summarization of broadcast news stories; we, hence, chose 20 documents for which the generation of reference summaries is less correlated with the positional information (or the position of sentences) as the held-out test set to evaluate the general performance of the proposed summarization framework, and 100 documents as the development set. 5.2 Performance evaluation For the assessment of summarization performance, we adopted the widely used ROUGE measure (Lin, 2004) because of its higher correlation with human judgments. It evaluates the quality of the summarization by counting the number of overlapping units, such as N-grams, longest common subsequences or skip-bigram, between the automatic summary and a set of reference summaries. Three variants of the ROGUE measure were used to quantify the utility of the proposed method. They are, respectively, the ROUGE-1 (unigram) measure, the ROUGE-2 (bigram) measure and the ROUGE-L (longest common subsequence) measure (Lin, 2004). The summarization ratio, defined as the ratio of the number of words in the automatic (or manual) summary to that in the reference transcript of a spoken document, was set to 10% in this research. Since increasing the summary length tends to increase the chance of getting higher scores in the recall rate of the various ROUGE measures and might not always select the right number of informative words in the automatic summary as compared to the reference summary, all the experimental results reported hereafter are obtained by calculating the F-scores of these ROUGE measures, respectively (Lin, 2004). Table 1 shows the levels of agreement (the Kappa statistic and ROUGE measures) between the three subjects for important sentence ranking. They seem to reflect the fact that people may not always agree with each other in selecting the important sentences for representing a given document. 5.3 Features for supervised summarizers We take BC as the representative supervised summarizer to study in this paper. The input to BC consists of a set of 28 indicative features used to characterize a spoken sentence, including the structural features, the lexical features, the acoustic features and the relevance feature. For each kind of acoustic features, the minimum, maximum, mean, difference value and mean difference value of a spoken sentence are extracted. The difference value is defined as the difference between the minimum and maximum values of the spoken sentence, while the mean difference value is defined as the mean difference between a sentence and its previous sentence. Finally, the relevance feature (VSM score) is use to measure the degree of relevance for a sentence to the whole document (Gong and Liu, 2001). These features are outlined in Table 2, where each of them was further normalized to zero mean and unit variance. 6 Experimental results and discussions 6.1 Baseline experiments In the first set of experiments, we evaluate the baseline performance of the LM and BC summarizers (cf. Sections 4.1 and 4.2), respectively. The corresponding results are detailed in Table 3, Kappa ROGUE-1 ROUGE-2 ROUGE-L 0.400 0.600 0.532 0.527 Table 1: The agreement among the subjects for important sentence ranking for the evaluation set. 
Structural features 1.Duration of the current sentence 2.Position of the current sentence 3.Length of the current sentence Lexical Features 1.Number of named entities 2.Number of stop words 3.Bigram language model scores 4.Normalized bigram scores Acoustic Features 1.The 1st formant 2.The 2nd formant 3.The pitch value 4.The peak normalized crosscorrelation of pitch Relevance Feature 1.VSM score Table 2: Basic sentence features used by BC. 84 where the values in the parentheses are the associated 95% confidence intervals. It is also worth mentioning that TD denotes the summarization results obtained based on manual transcripts of the spoken documents while SD denotes the results using the speech recognition transcripts which may contain speech recognition errors and sentence boundary detection errors. In this research, sentence boundaries were determined by speech pauses. For the TD case, the acoustic features were obtained by aligning the manual transcripts to their spoken documents counterpart by performing word-level forced alignment. Furthermore, the ROGUE measures, in essence, are evaluated by counting the number of overlapping units between the automatic summary and the reference summary; the corresponding evaluation results, therefore, would be severely affected by speech recognition errors when applying the various ROUGE measures to quantify the performance of speech summarization. In order to get rid of the cofounding effect of this factor, it is assumed that the selected summary sentences can also be presented in speech form (besides text form) such that users can directly listen to the audio segments of the summary sentences to bypass the problem caused by speech recognition errors. Consequently, we can align the ASR transcripts of the summary sentences to their respective audio segments to obtain the correct (manual) transcripts for the summarization performance evaluation (i.e., for the SD case). Observing Table 3 we notice two particularities. First, there are significant performance gaps between summarization using the manual transcripts and the erroneous speech recognition transcripts. The relative performance degradations are about 15%, 34% and 23%, respectively, for ROUGE-1, ROUGE2 and ROUGE-L measures. One possible explanation is that the erroneous speech recognition transcripts of spoken sentences would probably carry wrong information and thus deviate somewhat from representing the true theme of the spoken document. Second, the supervised summarizer (i.e., BC) outperforms the unsupervised summarizer (i.e., LM). The better performance of BC can be further explained by two reasons. One is that BC is trained with the handcrafted document-summary sentence labels in the development set while LM is instead conducted in a purely unsupervised manner. Another is that BC utilizes a rich set of features to characterize a given spoken sentence while LM is constructed solely on the basis of the lexical (unigram) information. 6.2 Experiments on the proposed methods We then turn our attention to investigate the utility of several methods deduced from our proposed summarization framework. We first consider the case when a 0-1 loss function is used (cf. (13)), which just show a simple combination of BC and LM. As can be seen from the first row of Table 4, such a combination can give about 4% to 5% absolute improvements as compared to the results of BC illustrated in Table 3. It in some sense confirms the feasibility of combining the supervised and unsupervised summarizers. 
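For concreteness, the ROUGE-1 F-scores reported in Tables 3 and 4 boil down to the unigram-overlap computation sketched below. The official ROUGE toolkit additionally handles multiple references, stemming and higher-order N-grams, so this is illustrative only.

```python
from collections import Counter


def rouge_1_f(candidate, reference):
    """ROUGE-1 F-score: unigram overlap between a candidate summary and one
    reference summary, combining recall and precision."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    recall = overlap / sum(ref.values())
    precision = overlap / sum(cand.values())
    return 2 * precision * recall / (precision + recall)


print(rouge_1_f("the anchor reported a storm",
                "a storm was reported by the anchor"))  # about 0.83
```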
Moreover, we consider the use of the loss functions defined in (11) (denoted by SIM) and (12) (denoted by MMR), and the corresponding results are shown in the second and the third rows of Table 4, respectively. It can be found that Text Document (TD) Spoken Document (SD) ROGUE-1 ROUGE-2 ROUGE-L ROGUE-1 ROUGE-2 ROUGE-L BC 0.445 (0.390 - 0.504) 0.346 (0.201 - 0.415) 0.404 (0.348 - 0.468) 0.369 (0.316 - 0.426) 0.241 (0.183 - 0.302) 0.321 (0.268 - 0.378) LM 0.387 (0.302 - 0.474) 0.264 (0.168 - 0.366) 0.334 (0.251 - 0.415) 0.319 (0.274 - 0.367) 0.164 (0.115 - 0.224) 0.253 (0.215 - 0.301) Table 3: The results achieved by the BC and LM summarizers, respectively. Text Document (TD) Spoken Document (SD) Prior Loss ROGUE-1 ROUGE-2 ROUGE-L ROGUE-1 ROUGE-2 ROUGE-L BC 0-1 0.501 0.401 0.459 0.417 0.281 0.356 SIM 0.524 0.425 0.473 0.475 0.351 0.420 MMR 0.529 0.426 0.479 0.475 0.351 0.420 Uniform SIM 0.405 0.281 0.348 0.365 0.209 0.305 MMR 0.417 0.282 0.359 0.391 0.236 0.338 Table 4: The results achieved by several methods derived from the proposed summarization framework. 85 MMR delivers higher summarization performance than SIM (especially for the SD case), which in turn verifies the merit of incorporating the MMR concept into the proposed framework for extractive summarization. If we further compare the results achieved by MMR with those of BC and LM as shown in Table 3, we can find significant improvements both for the TD and SD cases. By and large, for the TD case, the proposed summarization method offers relative performance improvements of about 19%, 23% and 19%, respectively, in the ROUGE-1, ROUGE-2 and ROUGE-L measures as compared to the BC baseline; while the relative improvements are 29%, 46% and 31%, respectively, in the same measurements for the SD case. On the other hand, the performance gap between the TD and SD cases are reduced to a good extent by using the proposed summarization framework. In the next set of experiments, we simply assume the sentence prior probability  j S P defined in (8) is uniformly distributed, namely, we do not use any supervised information cue but use the lexical information only. The importance of a given sentence is thus considered from two angles: 1) the relationship between a sentence and the whole document, and 2) the relationship between the sentence and the other individual sentences. The corresponding results are illustrated in the lower part of Table 4 (denoted by Uniform). We can see that the additional consideration of the sentence-sentence relationship appears to be beneficial as compared to that only considering the document-sentence relevance information (cf. the second row of Table 3). It also gives competitive results as compared to the performance of BC (cf. the first row of Table 3) for the SD case. 6.3 Comparison with conventional summarization methods In the final set of experiments, we compare our proposed summarization methods with a few existing summarization methods that have been widely used in various summarization tasks, including LEAD, VSM, LexRank and CRF; the corresponding results are shown in Table 5. It should be noted that the LEAD-based method simply extracts the first few sentences in a document as the summary. To our surprise, CRF does not provide superior results as compared to the other summarization methods. One possible explanation is that the structural evidence of the spoken documents in the test set is not strong enough for CRF to show its advantage of modeling the local structural information among sentences. 
On the other hand, LexRank gives a very promising performance in spite that it only utilizes lexical information in an unsupervised manner. This somewhat reflects the importance of capturing the global relationship for the sentences in the spoken document to be summarized. As compared to the results shown in the “BC” part of Table 4, we can see that our proposed methods significantly outperform all the conventional summarization methods compared in this paper, especially for the SD case. 7 Conclusions and future work We have proposed a risk minimization framework for extractive speech summarization, which enjoys several advantages. We have also presented a simple yet effective implementation that selects the summary sentences in an iterative manner. Experimental results demonstrate that the methods deduced from such a framework can yield substantial improvements over several popular summarization methods compared in this paper. We list below some possible future extensions: 1) integrating different selection strategies, e.g., the listwise strategy that defines the loss function on all the sentences associated with a document to be summarized, into this framework, 2) exploring different modeling approaches for this framework, 3) investigating discriminative training criteria for training the component models in this framework, and 4) extending and applying the proposed framework to multidocument summarization tasks. References James O. Berger Statistical decision theory and Bayesian analysis. Springer-Verlap, 1985. Berlin Chen. 2009. Word topic models for spoken document retrieval and transcription. ACM Transactions on Asian Language Information Processing, 8, (1): 2:1 - 2:27. Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In Proc. of Annual International ACM SIGIR Conference on ROGUE-1 ROUGE-2 ROUGE-L LEAD TD 0.320 0.197 0.283 SD 0.312 0.168 0.251 VSM TD 0.345 0.220 0.287 SD 0.337 0.189 0.277 LexRank TD 0.435 0.314 0.377 SD 0.348 0.204 0.294 CRF TD 0.431 0.315 0.383 SD 0.358 0.220 0.291 Table 5: The results achieved by four conventional summarization methods. 86 Research and Development in Information Retrieval: 335 - 336. Yi-Ting Chen, Berlin Chen and Hsin-Min Wang. 2009. A probabilistic generative framework for extractive broadcast news speech summarization. IEEE Transactions on Audio, Speech and Language Processing, 17, (1): 95 - 106. John M. Conroy and Dianne P. O’Leary. 2001. Text summarization via hidden Markov models. In Proc. of Annual International ACM SIGIR Conference on Research and Development in Information Retrieval: 406 - 407. Güneş Erkan and Dragomir R. Radev. 2004. LexRank: graph-based lexical centrality as salience in text summarization. Journal or Artificial Intelligence Research, 22: 457 - 479. Mohamed Abdel Fattah and Fuji Ren. 2009. GA, MR, FFNN, PNN and GMM based models for automatic text summarization. Computer Speech and Language, 23, (1): 126 - 144. Louisa Ferrier A maximum entropy approach to text summarization. School of Artificial Intelligence, University of Edinburgh, 2001. Sadaoki Furui, Tomonori Kikuchi, Yousuke Shinnaka and Chiori Hori. 2004. Speech-to-text and speechto-speech summarization of spontaneous speech. IEEE Transactions on Speech and Audio Processing, 12, (4): 401 - 408. Michel Galley. 2006. A skip-chain conditional random field for ranking meeting utterances by importance. In Proc. of Conference on Empirical Methods in Natural Language Processing: 364 - 372. 
Vaibhava Goel and William Byrne. 2000. Minimum Bayes-risk automatic speech recognition. Computer Speech and Language, 14, (2): 115 - 135. Yihong Gong and Xin Liu. 2001. Generic text summarization using relevance measure and latent semantic analysis. In Proc. of Annual International ACM SIGIR Conference on Research and Development in Information Retrieval: 19 - 25. Akira Inoue, Takayoshi Mikami and Yoichi Yamashita. 2004. Improvement of speech summarization using prosodic information, In Proc. of Speech Prosody: 599 - 602. Shankar Kumar and William Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Proc. of Human Language Technology conference / North American chapter of the Association for Computational Linguistics annual meeting: 169 - 176. Aleksander Kolcz, Vidya Prabakarmurthi and Jugal Kalita. 2001. Summarization as feature selection for text categorization. In Proc. of Conference on Information and Knowledge Management: 365 - 370. Julian Kupiec, Jan Pedersen and Francine Chen. 1999. A trainable document summarizer. In Proc. of Annual International ACM SIGIR Conference on Research and Development in Information Retrieval: 68 - 73. Konstantinos Koumpis and Steve Renals. 2000. Transcription And Summarization Of Voicemail Speech. In Proc. of International Conference on Spoken Language Processing: 688 - 691. Chin-Yew Lin. 2004. ROUGE: a Package for Automatic Evaluation of Summaries. In Proc. of Workshop on Text Summarization Branches Out. Shih-Hsiang Lin, Berlin Chen and Hsin-Min Wang. 2009. A comparative study of probabilistic ranking models for Chinese spoken document summarization. ACM Transactions on Asian Language Information Processing, 8, (1): 3:1 - 3:23. Shih-Hsiang Lin, Yueng-Tien Lo, Yao-Ming Yeh and Berlin Chen. 2009. Hybrids of supervised and unsupervised models for extractive speech summarization. In Proc. of Annual Conference of the International Speech Communication Association: 1507 - 1510. Inderjeet Mani and Mark T. Maybury Advances in automatic text summarization. MIT Press, Cambridge, 1999. Sameer R. Maskey and Julia Hirschberg. 2003. Automatic Summarization of Broadcast News using Structural Features. In Proc. of the European Conf. Speech Communication and Technology: 1173 - 1176. Kathleen McKeown, Julia Hirschberg, Michel Galley and Sameer Maskey. 2005. From text to speech summarization. In Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing: 997 - 1000. Rada Mihalcea and Paul Tarau. 2005. TextRank: bringing order into texts. In Proc. of Conference on Empirical Methods in Natural Language Processing: 404 - 411. Dragomir R. Radev, Hongyan Jing, Małgorzata Stys and Daniel Tam. 2004. Centroid-based summarization of multiple documents. Information Processing and Management, 40: 919 - 938. Dou Shen, Jian-Tao Sun, Hua Li, Qiang Yang and Zheng Chen. 2007. Document summarization using conditional random fields. In Proc. of International Joint Conference on Artificial Intelligence: 2862 - 2867. Hsin-Min Wang, Berlin Chen, Jen-Wei Kuo and ShihSian Cheng. 2005. MATBN: A Mandarin Chinese broadcast news corpus. International Journal of Computational Linguistics and Chinese Language Processing, 10, (2): 219 - 236. ChengXiang Zhai and John Lafferty. 2006. A risk minimization framework for information retrieval. Information Processing & Management, 42, (1): 31 - 55. ChengXiang Zhai. Statistical language models for information retrieval. Morgan & Claypool Publishers, 2008. Justin Jian Zhang, Ho Yin Chan and Pascale Fung. 
2007. Improving Lecture Speech Summarization Using Rhetorical Information. In Proc. of Workshop on Automatic Speech Recognition and Understanding: 195 - 200.
2010
9
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 875–885, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Convolution Kernel over Packed Parse Forest Min Zhang Hui Zhang Haizhou Li Institute for Infocomm Research A-STAR, Singapore {mzhang,vishz,hli}@i2r.a-star.edu.sg Abstract This paper proposes a convolution forest kernel to effectively explore rich structured features embedded in a packed parse forest. As opposed to the convolution tree kernel, the proposed forest kernel does not have to commit to a single best parse tree, is thus able to explore very large object spaces and much more structured features embedded in a forest. This makes the proposed kernel more robust against parsing errors and data sparseness issues than the convolution tree kernel. The paper presents the formal definition of convolution forest kernel and also illustrates the computing algorithm to fast compute the proposed convolution forest kernel. Experimental results on two NLP applications, relation extraction and semantic role labeling, show that the proposed forest kernel significantly outperforms the baseline of the convolution tree kernel. 1 Introduction Parse tree and packed forest of parse trees are two widely used data structures to represent the syntactic structure information of sentences in natural language processing (NLP). The structured features embedded in a parse tree have been well explored together with different machine learning algorithms and proven very useful in many NLP applications (Collins and Duffy, 2002; Moschitti, 2004; Zhang et al., 2007). A forest (Tomita, 1987) compactly encodes an exponential number of parse trees. In this paper, we study how to effectively explore structured features embedded in a forest using convolution kernel (Haussler, 1999). As we know, feature-based machine learning methods are less effective in modeling highly structured objects (Vapnik, 1998), such as parse tree or semantic graph in NLP. This is due to the fact that it is usually very hard to represent structured objects using vectors of reasonable dimensions without losing too much information. For example, it is computationally infeasible to enumerate all subtree features (using subtree a feature) for a parse tree into a linear feature vector. Kernel-based machine learning method is a good way to overcome this problem. Kernel methods employ a kernel function, that must satisfy the properties of being symmetric and positive, to measure the similarity between two objects by computing implicitly the dot product of certain features of the input objects in high (or even infinite) dimensional feature spaces without enumerating all the features (Vapnik, 1998). Many learning algorithms, such as SVM (Vapnik, 1998), the Perceptron learning algorithm (Rosenblatt, 1962) and Voted Perceptron (Freund and Schapire, 1999), can work directly with kernels by replacing the dot product with a particular kernel function. This nice property of kernel methods, that implicitly calculates the dot product in a high-dimensional space over the original representations of objects, has made kernel methods an effective solution to modeling structured objects in NLP. In the context of parse tree, convolution tree kernel (Collins and Duffy, 2002) defines a feature space consisting of all subtree types of parse trees and counts the number of common subtrees as the syntactic similarity between two parse trees. 
The tree kernel has shown much success in many NLP applications like parsing (Collins and Duffy, 2002), semantic role labeling (Moschitti, 2004; Zhang et al., 2007), relation extraction (Zhang et al., 2006), pronoun resolution (Yang et al., 2006), question classification (Zhang and Lee, 2003) and machine translation (Zhang and Li, 2009), where the tree kernel is used to compute the similarity between two NLP application instances that are usually represented by parse trees. However, in those studies, the tree kernel only covers the features derived from single 1875 best parse tree. This may largely compromise the performance of tree kernel due to parsing errors and data sparseness. To address the above issues, this paper constructs a forest-based convolution kernel to mine structured features directly from packed forest. A packet forest compactly encodes exponential number of n-best parse trees, and thus containing much more rich structured features than a single parse tree. This advantage enables the forest kernel not only to be more robust against parsing errors, but also to be able to learn more reliable feature values and help to solve the data sparseness issue that exists in the traditional tree kernel. We evaluate the proposed kernel in two real NLP applications, relation extraction and semantic role labeling. Experimental results on the benchmark data show that the forest kernel significantly outperforms the tree kernel. The rest of the paper is organized as follows. Section 2 reviews the convolution tree kernel while section 3 discusses the proposed forest kernel in details. Experimental results are reported in section 4. Finally, we conclude the paper in section 5. 2 Convolution Kernel over Parse Tree Convolution kernel was proposed as a concept of kernels for discrete structures by Haussler (1999) and related but independently conceived ideas on string kernels first presented in (Watkins, 1999). The framework defines the kernel function between input objects as the convolution of “subkernels”, i.e. the kernels for the decompositions (parts) of the input objects. The parse tree kernel (Collins and Duffy, 2002) is an instantiation of convolution kernel over syntactic parse trees. Given a parse tree, its features defined by a tree kernel are all of its subtree types and the value of a given feature is the number of the occurrences of the subtree in the parse tree. Fig. 1 illustrates a parse tree with all of its 11 subtree features covered by the convolution tree kernel. In the tree kernel, a parse tree T is represented by a vector of integer counts of each subtree type (i.e., subtree regardless of its ancestors, descendants and span covered): ( ) T  (# subtreetype1(T), …, # subtreetypen(T)) where # subtreetypei(T) is the occurrence number of the ith subtree type in T. The tree kernel counts the number of common subtrees as the syntactic similarity between two parse trees. Since the number of subtrees is exponential with the tree size, it is computationally infeasible to directly use the feature vector ( ) T  . To solve this computational issue, Collins and Duffy (2002) proposed the following tree kernel to calculate the dot product between the above high dimensional vectors implicitly. 
1 1 2 2 1 1 2 2 1 2 1 2 1 2 1 2 1 2 ( , ) ( ), ( ) # ( ) # ( ) ( ) ( ) ( , ) i i i i i subtree subtree i n N n N n N n N K T T T T subtreetype T subtreetype T I n I n n n                             where N1 and N2 are the sets of nodes in trees T1 and T2, respectively, and ( ) i subtree I n is a function that is 1 iff the subtreetypei occurs with root at node n and zero otherwise, and 1 2 ( , ) n n  is the number of the common subtrees rooted at n1 and n2, i.e., 1 2 1 2 ( , ) ( ) ( ) i i subtree subtree i n n I n I n     1 2 ( , ) n n  can be computed by the following recursive rules: IN in the bank DT NN PP IN the bank DT NN PP IN in bank DT NN PP IN in the DT NN PP IN in DT NN PP IN the DT PP NN IN bank DT NN PP IN DT NN PP IN in the bank DT NN IN in the bank DT NN PP Figure 1. A parse tree and its 11 subtree features covered by convolution tree kernel 876 Rule 1: if the productions (CFG rules) at 1n and 2n are different, 1 2 ( , ) 0 n n   ; Rule 2: else if both 1n and 2n are pre-terminals (POS tags), 1 2 ( , ) 1 n n    ; Rule 3: else, 1 ( ) 1 2 1 2 1 ( , ) (1 ( ( , ), ( , ))) nc n j n n ch n j ch n j        , where 1 ( ) nc n is the child number of 1n , ch(n,j) is the jth child of node n and(0<≤1) is the decay factor in order to make the kernel value less variable with respect to the subtree sizes (Collins and Duffy, 2002). The recursive Rule 3 holds because given two nodes with the same children, one can construct common subtrees using these children and common subtrees of further offspring. The time complexity for computing this kernel is 1 2 (| | | |) O N N  . As discussed in previous section, when convolution tree kernel is applied to NLP applications, its performance is vulnerable to the errors from the single parse tree and data sparseness. In this paper, we present a convolution kernel over packed forest to address the above issues by exploring structured features embedded in a forest. 3 Convolution Kernel over Forest In this section, we first illustrate the concept of packed forest and then give a detailed discussion on the covered feature space, fractional count, feature value and the forest kernel function itself. 3.1 Packed forest of parse trees Informally, a packed parse forest, or (packed) forest in short, is a compact representation of all the derivations (i.e. parse trees) for a given sentence under context-free grammar (Tomita, 1987; Billot and Lang, 1989; Klein and Manning, 2001). It is the core data structure used in natural language parsing and other downstream NLP applications, such as syntax-based machine translation (Zhang et al., 2008; Zhang et al., 2009a). In parsing, a sentence corresponds to exponential number of parse trees with different tree probabilities, where a forest can compact all the parse trees by sharing their common subtrees in a bottom-up manner. Formally, a packed forest 𝐹 can be described as a triple: 𝐹= < 𝑉, 𝐸, 𝑆> where 𝑉is the set of non-terminal nodes, 𝐸 is the set of hyper-edges and 𝑆 is a sentence NNP[1,1] VV[2,2] NN[4,4] IN[5,5] John saw a man NP[3,4] in the bank DT[3,3] DT[6,6] NN[7,7] PP[5,7] VP[2,4] NP[3,7] VP[2,7] IP[1,7] NNP VV NN IN DT NN John saw a man in the bank DT VP NP VP IP PP NNP VV NN IN DT NN John saw a man in the bank DT NP NP VP IP PP IP[1,7] VP[2,7] NNP[1,1] a) A Forest f b) A Hyper-edge e c) A Parse Tree T1 d) A Parse Tree T2 Figure 2. 
An example of a packed forest, a hyper-edge and two parse trees covered by the packed forest 877 represented as an ordered word sequence. A hyper-edge 𝑒 is a group of edges in a parse tree which connects a father node and its all child nodes, representing a CFG rule. A non-terminal node in a forest is represented as a “label [start, end]”, where the “label” is its syntax category and “[start, end]” is the span of words it covers. As shown in Fig. 2, these two parse trees (𝑇1 and 𝑇2) can be represented as a single forest by sharing their common subtrees (such as NP[3,4] and PP[5,7]) and merging common non-terminal nodes covering the same span (such as VP[2,7], where there are two hyper-edges attach to it). Given the definition of forest, we introduce the concepts of inside probability β . and outside probability α(. ) that are widely-used in parsing (Baker, 1979; Lari and Young, 1990) and are also to be used in our kernel calculation. β 𝑣 𝑝, 𝑝 = 𝑃(𝑣→𝑆[𝑝]) β 𝑣 𝑝, 𝑞 = 𝑃 𝑒 𝑒 𝑖𝑠 𝑎 𝑕𝑦𝑝𝑒𝑟−𝑒𝑑𝑔𝑒 𝑎𝑡𝑡𝑎𝑐𝑕𝑒𝑑 𝑡𝑜 𝑣 ∙ 𝛽(𝑐𝑖[𝑝𝑖, 𝑞𝑖]) 𝑐𝑖 𝑝𝑖,𝑞𝑖 𝑖𝑠 𝑎 𝑙𝑒𝑎𝑓 𝑛𝑜𝑑𝑒 𝑜𝑓 𝑒 α 𝑟𝑜𝑜𝑡(𝑓) = 1 α 𝑣 𝑝, 𝑞 = α 𝑟𝑜𝑜𝑡 𝑒 ∙𝑃 𝑒 𝑒 𝑖𝑠 𝑎 𝑕𝑦𝑝𝑒𝑟− 𝑒𝑑𝑔𝑒 𝑎𝑛𝑑 𝑣 𝑖𝑠 𝑖𝑡𝑠 𝑜𝑛𝑒 𝑙𝑒𝑎𝑓 𝑛𝑜𝑑𝑒 ∙ 𝛽(𝑐𝑖[𝑝𝑖, 𝑞𝑖])) 𝑐𝑖 𝑝𝑖,𝑞𝑖 𝑖𝑠 𝑎 𝑐𝑕𝑖𝑙𝑑𝑟𝑒𝑛 𝑛𝑜𝑑𝑒 𝑜𝑓 𝑒 𝑒𝑥𝑐𝑒𝑝𝑡 𝑣 where 𝑣 is a forest node, 𝑆[𝑝] is the 𝑝𝑡𝑕 word of input sentence 𝑆, 𝑃(𝑣→𝑆[𝑝]) is the probability of the CFG rule 𝑣→𝑆[𝑝], 𝑟𝑜𝑜𝑡(. ) returns the root node of input structure, [𝑝𝑖, 𝑞𝑖] is a sub-span of 𝑝, 𝑞 , being covered by 𝑐𝑖, and 𝑃 𝑒 is the PCFG probability of 𝑒. From these definitions, we can see that the inside probability is total probability of generating words 𝑆 𝑝, 𝑞 from non-terminal node 𝑣 𝑝, 𝑞 while the outside probability is the total probability of generating node 𝑣 𝑝, 𝑞 and words outside 𝑆[𝑝, 𝑞] from the root of forest. The inside probability can be calculated using dynamic programming in a bottomup fashion while the outside probability can be calculated using dynamic programming in a topto-down way. 3.2 Convolution forest kernel In this subsection, we first define the feature space covered by forest kernel, and then define the forest kernel function. 3.2.1 Feature space, object space and feature value The forest kernel counts the number of common subtrees as the syntactic similarity between two forests. Therefore, in the same way as tree kernel, its feature space is also defined as all the possible subtree types that a CFG grammar allows. In a forest kernel, forest 𝐹 is represented by a vector of fractional counts of each subtree type (subtree regardless of its ancestors, descendants and span covered): ( ) F  (# subtreetype1(F), …, # subtreetypen(F)) = (#subtreetype1(n-best parse trees), …, (1) # subtreetypen(n-best parse trees)) where # subtreetypei(F) is the occurrence number of the ith subtree type (subtreetypei) in forest F, i.e., a n-best parse tree lists with a huge n. Although the feature spaces of the two kernels are the same, their object spaces (tree vs. forest) and feature values (integer counts vs. fractional counts) differ very much. A forest encodes exponential number of parse trees, and thus containing exponential times more subtrees than a single parse tree. This ensures forest kernel to learn more reliable feature values and is also able to help to address the data sparseness issues in a better way than tree kernel does. Forest kernel is also expected to yield more non-zero feature values than tree kernel. Furthermore, different parse tree in a forest represents different derivation and interpretation for a given sentence. 
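The inside/outside recursions of Section 3.1 can be sketched with a toy encoding. The layout below is our own assumption, not the paper's data structures: edges maps each forest node to the hyper-edges it heads, each hyper-edge being a pair of its PCFG probability and its ordered child nodes, and lex stores the lexical-rule probability of each pre-terminal node, which in this simplified encoding heads no hyper-edge.

```python
from collections import defaultdict

# Toy packed-forest encoding (our own assumption, for illustration only):
#   edges[v] -> list of (prob, children) hyper-edges headed by node v
#   lex[v]   -> P(v -> word) for pre-terminal nodes, which head no hyper-edge

def inside(v, edges, lex, beta):
    """beta(v): total probability of generating v's span from v (bottom-up)."""
    if v in beta:
        return beta[v]
    if not edges.get(v):                      # pre-terminal / leaf node
        beta[v] = lex[v]
        return beta[v]
    total = 0.0
    for prob, children in edges[v]:
        p = prob
        for c in children:
            p *= inside(c, edges, lex, beta)
        total += p
    beta[v] = total
    return total

def topological_order(root, edges):
    """Nodes ordered so every head precedes its children (reverse post-order)."""
    order, seen = [], set()
    def visit(v):
        if v in seen:
            return
        seen.add(v)
        for _, children in edges.get(v, []):
            for c in children:
                visit(c)
        order.append(v)
    visit(root)
    return list(reversed(order))

def inside_outside(root, edges, lex):
    """Return (alpha, beta) for every node reachable from the root."""
    beta = {}
    inside(root, edges, lex, beta)
    alpha = defaultdict(float)
    alpha[root] = 1.0
    for v in topological_order(root, edges):
        for prob, children in edges.get(v, []):
            for c in children:
                siblings = 1.0
                for s in children:
                    if s is not c:
                        siblings *= beta[s]
                alpha[c] += alpha[v] * prob * siblings
    return alpha, beta
```

The memoised recursion corresponds to the bottom-up computation of β and the top-down computation of α described above.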
Therefore, forest kernel should be more robust to parsing errors than tree kernel. In tree kernel, one occurrence of a subtree contributes 1 to the value of its corresponding feature (subtree type), so the feature value is an integer count. However, the case turns out very complicated in forest kernel. In a forest, each of its parse trees, when enumerated, has its own 878 probability. So one subtree extracted from different parse trees should have different fractional count with regard to the probabilities of different parse trees. Following the previous work (Charniak and Johnson, 2005; Huang, 2008), we define the fractional count of the occurrence of a subtree in a parse tree 𝑡𝑖 as 𝑐 𝑠𝑢𝑏𝑡𝑟𝑒𝑒, 𝑡𝑖 = 0 𝑖𝑓 𝑠𝑢𝑏𝑡𝑟𝑒𝑒∉𝑡𝑖 𝑃 𝑠𝑢𝑏𝑡𝑟𝑒𝑒, 𝑡𝑖|𝑓, 𝑠 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒 = 0 𝑖𝑓 𝑠𝑢𝑏𝑡𝑟𝑒𝑒∉𝑡𝑖 𝑃 𝑡𝑖|𝑓, 𝑠 𝑜𝑡𝑕𝑒𝑟𝑤𝑖𝑠𝑒 where we have 𝑃 𝑠𝑢𝑏𝑡𝑟𝑒𝑒, 𝑡𝑖|𝑓, 𝑠 = 𝑃 𝑡𝑖|𝑓, 𝑠 if 𝑠𝑢𝑏𝑡𝑟𝑒𝑒∈𝑡𝑖. Then we define the fractional count of the occurrence of a subtree in a forest f as 𝑐 𝑠𝑢𝑏𝑡𝑟𝑒𝑒, 𝑓 = 𝑃 𝑠𝑢𝑏𝑡𝑟𝑒𝑒|𝑓, 𝑠 = 𝑃 𝑠𝑢𝑏𝑡𝑟𝑒𝑒, 𝑡𝑖|𝑓, 𝑠 𝑡𝑖 (2) = 𝐼𝑠𝑢𝑏𝑡𝑟𝑒𝑒 𝑡𝑖 ∙𝑃 𝑡𝑖|𝑓, 𝑠 𝑡𝑖 where 𝐼𝑠𝑢𝑏𝑡𝑟𝑒𝑒 𝑡𝑖 is a binary function that is 1 iif the 𝑠𝑢𝑏𝑡𝑟𝑒𝑒∈𝑡𝑖 and zero otherwise. Obviously, it needs exponential time to compute the above fractional counts. However, due to the property of forest that compactly represents all the parse trees, the posterior probability of a subtree in a forest, 𝑃 𝑠𝑢𝑏𝑡𝑟𝑒𝑒|𝑓, 𝑠 , can be easily computed in an Inside-Outside fashion as the product of three parts: the outside probability of its root node, the probabilities of parse hyperedges involved in the subtree, and the inside probabilities of its leaf nodes (Lari and Young, 1990; Mi and Huang, 2008). 𝑐 𝑠𝑢𝑏𝑡𝑟𝑒𝑒, 𝑓 = 𝑃 𝑠𝑢𝑏𝑡𝑟𝑒𝑒|𝑓, 𝑠 (3) = 𝛼𝛽(𝑠𝑢𝑏𝑡𝑟𝑒𝑒) 𝛼𝛽(𝑟𝑜𝑜𝑡 𝑓 ) where 𝛼𝛽 𝑠𝑢𝑏𝑡𝑟𝑒𝑒 = 𝛼 𝑟𝑜𝑜𝑡 𝑠𝑢𝑏𝑡𝑟𝑒𝑒 (4) ∙ 𝑃 𝑒 𝑒∈𝑠𝑢𝑏𝑡𝑟𝑒𝑒 ∙ 𝛽 𝑣 𝑣∈𝑙𝑒𝑎𝑓 𝑠𝑢𝑏𝑡𝑟𝑒𝑒 and 𝛼𝛽 𝑟𝑜𝑜𝑡 𝑓 = 𝛼 𝑟𝑜𝑜𝑡 𝑓 ∙𝛽 𝑟𝑜𝑜𝑡 𝑓 = 𝛽 𝑟𝑜𝑜𝑡 𝑓 where 𝛼 . and 𝛽(. ) denote the outside and inside probabilities. They can be easily obtained using the equations introduced at section 3.1. Given a subtree, we can easily compute its fractional count (i.e. its feature value) directly using eq. (3) and (4) without the need of enumerating each parse trees as shown at eq. (2) 1. Nonetheless, it is still computationally infeasible to directly use the feature vector 𝜙(𝐹) (see eq. (1)) by explicitly enumerating all subtrees although its fractional count is easily calculated. In the next subsection, we present the forest kernel that implicitly calculates the dot-product between two 𝜙(𝐹)s in a polynomial time. 3.2.2 Convolution forest kernel The forest kernel counts the fractional numbers of common subtrees as the syntactic similarity between two forests. We define the forest kernel function 𝐾𝑓 𝑓1, 𝑓2 in the following way. 𝐾𝑓 𝑓1, 𝑓2 =< 𝜙 𝑓1 , 𝜙 𝑓2 > (5) = #𝑠𝑢𝑏𝑡𝑟𝑒𝑒𝑡𝑦𝑝𝑒𝑖(𝑓1). #𝑠𝑢𝑏𝑡𝑟𝑒𝑒𝑡𝑦𝑝𝑒𝑖(𝑓2) 𝑖 = 𝐼𝑒𝑞 𝑠𝑢𝑏𝑡𝑟𝑒𝑒1, 𝑠𝑢𝑏𝑡𝑟𝑒𝑒2 𝑠𝑢𝑏𝑡𝑟𝑒𝑒1∈𝑓1 𝑠𝑢𝑏𝑡𝑟𝑒𝑒2∈𝑓2 ∙𝑐 𝑠𝑢𝑏𝑡𝑟𝑒𝑒1, 𝑓1 ∙𝑐 𝑠𝑢𝑏𝑡𝑟𝑒𝑒2, 𝑓2 = Δ′ 𝑣1, 𝑣2 𝑣2∈𝑁2 𝑣1∈𝑁1 where  𝐼𝑒𝑞 ∙,∙ is a binary function that is 1 iif the input two subtrees are identical (i.e. they have the same typology and node labels) and zero otherwise;  𝑐 ∙,∙ is the fractional count defined at eq. (3);  𝑁1 and 𝑁2 are the sets of nodes in forests 𝑓1 and 𝑓2;  Δ′ 𝑣1, 𝑣2 returns the accumulated value of products between each two fractional counts of the common subtrees rooted at 𝑣1 and 𝑣2, i.e., Δ′ 𝑣1, 𝑣2 = 𝐼𝑒𝑞 𝑠𝑢𝑏𝑡𝑟𝑒𝑒1, 𝑠𝑢𝑏𝑡𝑟𝑒𝑒2 𝑟𝑜𝑜𝑡 𝑠𝑢𝑏𝑡𝑟𝑒𝑒1 =𝑣1 𝑟𝑜𝑜𝑡 𝑠𝑢𝑏𝑡𝑟𝑒𝑒2 =𝑣2 ∙𝑐 𝑠𝑢𝑏𝑡𝑟𝑒𝑒1, 𝑓1 ∙𝑐 𝑠𝑢𝑏𝑡𝑟𝑒𝑒2, 𝑓2 1 It has been proven in parsing literatures (Baker, 1979; Lari and Young, 1990) that eq. 
(3) defined by Inside-Outside probabilities is exactly to compute the sum of those parse tree probabilities that cover the subtree of being considered as defined at eq. (2). 879 We next show that Δ′ 𝑣1, 𝑣2 can be computed recursively in a polynomial time as illustrated at Algorithm 1. To facilitate discussion, we temporarily ignore all fractional counts in Algorithm 1. Indeed, Algorithm 1 can be viewed as a natural extension of convolution kernel from over tree to over forest. In forest2, a node can root multiple hyper-edges and each hyper-edge is independent to each other. Therefore, Algorithm 1 iterates each hyper-edge pairs with roots at 𝑣1 and 𝑣2 (line 3-4), and sums over (eq. (7) at line 9) each recursively-accumulated sub-kernel scores of subtree pairs extended from the hyper-edge pair 𝑒1, 𝑒2 (eq. (6) at line 8). Eq. (7) holds because the hyper-edges attached to the same node are independent to each other. Eq. (6) is very similar to the Rule 3 of tree kernel (see section 2) except its inputs are hyper-edges and its further expansion is based on forest nodes. Similar to tree kernel (Collins and Duffy, 2002), eq. (6) holds because a common subtree by extending from (𝑒1, 𝑒2) can be formed by taking the hyper-edge (𝑒1, 𝑒2), together with a choice at each of their leaf nodes of simply taking the non-terminal at the leaf node, or any one of the common subtrees with root at the leaf node. Thus there are 1 + Δ′ 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 , 𝑙𝑒𝑎𝑓 𝑒2, 𝑗 possible choices at the jth leaf node. In total, there are Δ′′ 𝑒1, 𝑒2 (eq. (6)) common subtrees by extending from (𝑒1, 𝑒2) and Δ′ 𝑣1, 𝑣2 (eq. (7)) common subtrees with root at 𝑣1, 𝑣2 . Obviously Δ′ 𝑣1, 𝑣2 calculated by Algorithm 1 is a proper convolution kernel since it simply counts the number of common subtrees under the root 𝑣1, 𝑣2 . Therefore, 𝐾𝑓 𝑓1, 𝑓2 defined at eq. (5) and calculated through Δ′ 𝑣1, 𝑣2 is also a proper convolution kernel. From eq. (5) and Algorithm 1, we can see that each hyper-edge pair (𝑒1, 𝑒2) is only visited at most one time in computing the forest kernel. Thus the time complexity for computing 𝐾𝑓 𝑓1, 𝑓2 is 𝑂(|𝐸1| ∙|𝐸2|) , where 𝐸1 and 𝐸2 are the set of hyper-edges in forests 𝑓1 and 𝑓2 , respectively. Given a forest and the best parse trees, the number of hyperedges is only several times (normally <=3 after pruning) than that of tree nodes in the parse tree3. 2 Tree can be viewed as a special case of forest with only one hyper-edge attached to each tree node. 3 Suppose there are K forest nodes in a forest, each node has M associated hyper-edges fan out and each hyper-edge has N children. Then the forest is capable of encoding 𝑀 𝐾−1 𝑁−1 parse trees at most (Zhang et al., 2009b). Same as tree kernel, forest kernel is running more efficiently in practice since only two nodes with the same label needs to be further processed (line 2 of Algorithm 1). Now let us see how to integrate fractional counts into forest kernel. According to Algorithm 1 (eq. (7)), we have (𝑒1/𝑒2 are attached to 𝑣1/𝑣2, respectively) Δ′ 𝑣1, 𝑣2 = Δ′′ 𝑒1, 𝑒2 𝑒1=𝑒2 Recall eq. (4), a fractional count consists of outside, inside and subtree probabilities. It is more straightforward to incorporate the outside and subtree probabilities since all the subtrees with roots at 𝑣1, 𝑣2 share the same outside probability and each hyper-edge pair is only visited one time. Thus we can integrate the two probabilities into Δ′ 𝑣1, 𝑣2 as follows. 
Δ′ 𝑣1, 𝑣2 = 𝜆∙𝛼 𝑣1 ∙𝛼 𝑣2 ∙ 𝑃 𝑒1 ∙𝑃 𝑒2 ∙Δ′′ 𝑒1, 𝑒2 𝑒1=𝑒2 (8) where, following tree kernel, a decay factor 𝜆(0 < 𝜆≤1) is also introduced in order to make the kernel value less variable with respect to the subtree sizes (Collins and Duffy, 2002). It functions like multiplying each feature value by 𝜆𝑠𝑖𝑧𝑒𝑖, where 𝑠𝑖𝑧𝑒𝑖 is the number of hyper-edges in 𝑠𝑢𝑏𝑡𝑟𝑒𝑒𝑖. Algorithm 1. Input: 𝑓1, 𝑓2: two packed forests 𝑣1, 𝑣2: any two nodes of 𝑓1 and 𝑓2 Notation: 𝐼𝑒𝑞 ∙,∙ : defined at eq. (5) 𝑛𝑙 𝑒1 : number of leaf node of 𝑒1 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 : the jth leaf node of 𝑒1 Output: Δ′ 𝑣1, 𝑣2 1. Δ′ 𝑣1, 𝑣2 = 0 2. if 𝑣1. 𝑙𝑎𝑏𝑒𝑙≠𝑣2. 𝑙𝑎𝑏𝑒𝑙 exit 3. for each hyper-edge 𝑒1 attached to 𝑣1 do 4. for each hyper-edge 𝑒2 attached to 𝑣2 do 5. if 𝐼𝑒𝑞 𝑒1, 𝑒2 == 0 do 6. goto line 3 7. else do 8. Δ′′ 𝑒1, 𝑒2 = 1 + 𝑛𝑙 𝑒1 𝑗=1 Δ′ 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 , 𝑙𝑒𝑎𝑓 𝑒2, 𝑗 (6) 9. Δ′ 𝑣1, 𝑣2 += Δ′′ 𝑒1, 𝑒2 (7) 10. end if 11. end for 12. end for 880 The inside probability is only involved when a node does not need to be further expanded. The integer 1 at eq. (6) represents such case. So the inside probability is integrated into eq. (6) by replacing the integer 1 as follows. Δ′′ 𝑒1, 𝑒2 = 𝛽 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 ∙𝛽 𝑙𝑒𝑎𝑓 𝑒2, 𝑗 𝑛𝑙 𝑒1 𝑗=1 + Δ′ 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 , 𝑙𝑒𝑎𝑓 𝑒2, 𝑗 𝛼 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 ∙𝛼 𝑙𝑒𝑎𝑓 𝑒2, 𝑗 (9) where in the last expression the two outside probabilities 𝛼 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 and 𝛼 𝑙𝑒𝑎𝑓 𝑒2, 𝑗 are removed. This is because 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 and 𝑙𝑒𝑎𝑓 𝑒2, 𝑗 are not roots of the subtrees of being explored (only outside probabilities of the root of a subtree should be counted in its fractional count), and Δ′ 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 , 𝑙𝑒𝑎𝑓 𝑒2, 𝑗 already contains the two outside probabilities of 𝑙𝑒𝑎𝑓 𝑒1, 𝑗 and 𝑙𝑒𝑎𝑓 𝑒2, 𝑗 . Referring to eq. (3), each fractional count needs to be normalized by 𝛼𝛽(𝑟𝑜𝑜𝑡 𝑓 ). Since 𝛼𝛽(𝑟𝑜𝑜𝑡 𝑓 ) is independent to each individual fractional count, we do the normalization outside the recursive function Δ′′ 𝑒1, 𝑒2 . Then we can re-formulize eq. (5) as 𝐾𝑓 𝑓1, 𝑓2 =< 𝜙 𝑓1 , 𝜙 𝑓2 > = Δ′ 𝑣1, 𝑣2 𝑣2∈𝑁2 𝑣1∈𝑁1 𝛼𝛽 𝑟𝑜𝑜𝑡 𝑓1 ∙𝛼𝛽 𝑟𝑜𝑜𝑡 𝑓2 (10) Finally, since the size of input forests is not constant, the forest kernel value is normalized using the following equation. 𝐾 𝑓 𝑓1, 𝑓2 = 𝐾𝑓 𝑓1, 𝑓2 𝐾𝑓 𝑓1, 𝑓1 ∙𝐾𝑓 𝑓2, 𝑓2 (11) From the above discussion, we can see that the proposed forest kernel is defined together by eqs. (11), (10), (9) and (8). Thanks to the compact representation of trees in forest and the recursive nature of the kernel function, the introduction of fractional counts and normalization do not change the convolution property and the time complexity of the forest kernel. Therefore, the forest kernel 𝐾 𝑓 𝑓1, 𝑓2 is still a proper convolution kernel with quadratic time complexity. 3.3 Comparison with previous work To the best of our knowledge, this is the first work to address convolution kernel over packed parse forest. Convolution tree kernel is a special case of the proposed forest kernel. From feature exploration viewpoint, although theoretically they explore the same subtree feature spaces (defined recursively by CFG parsing rules), their feature values are different. Forest encodes exponential number of trees. So the number of subtree instances extracted from a forest is exponential number of times greater than that from its corresponding parse tree. The significant difference of the amount of subtree instances makes the parameters learned from forests more reliable and also can help to address the data sparseness issue. To some degree, forest kernel can be viewed as a tree kernel with very powerful back-off mechanism. 
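Before moving on, a compact sketch of the recursion in Algorithm 1 (Section 3.2.2) may help. The version below is the unweighted case, with the decay factor λ, the outside/inside weights of eqs. (8)-(9) and the normalization of eqs. (10)-(11) omitted, as in the paper's own first presentation of the algorithm. The node/hyper-edge encoding and the equality test on child label sequences are our own reading, not code from the paper.

```python
# Assumed encoding (ours): a forest node v has
#   v.label      -- syntactic category, e.g. "NP"
#   v.hyperedges -- list of hyper-edges headed by v, each an ordered
#                   list of child nodes

def delta_prime(v1, v2, memo):
    """Number of common subtrees rooted at (v1, v2); cf. Algorithm 1."""
    key = (id(v1), id(v2))
    if key in memo:
        return memo[key]
    result = 0.0
    if v1.label == v2.label:                     # Algorithm 1, line 2
        for e1 in v1.hyperedges:                 # line 3
            for e2 in v2.hyperedges:             # line 4
                # skip non-matching hyper-edge pairs (cf. lines 5-6):
                # same CFG rule means the child label sequences agree
                if [c.label for c in e1] != [c.label for c in e2]:
                    continue
                prod = 1.0                       # eq. (6), with lambda = 1
                for c1, c2 in zip(e1, e2):
                    prod *= 1.0 + delta_prime(c1, c2, memo)
                result += prod                   # eq. (7)
    memo[key] = result
    return result

def forest_kernel(nodes1, nodes2):
    """Unweighted K_f(f1, f2): sum of delta_prime over node pairs (eq. (5))."""
    memo = {}
    return sum(delta_prime(v1, v2, memo)
               for v1 in nodes1 for v2 in nodes2)
```

In the full kernel each term is further weighted by λ, the outside probabilities of v1 and v2 and the hyper-edge probabilities (eq. (8)), the leaf terms are replaced by inside probabilities (eq. (9)), and the final score is divided by the αβ(root) terms and length-normalized (eqs. (10)-(11)).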
In addition, forest kernel is much more robust against parsing errors than tree kernel. Aiolli et al. (2006; 2007) propose using Direct Acyclic Graphs (DAG) as a compact representation of tree kernel-based models. This can largely reduce the computational burden and storage requirements by sharing the common structures and feature vectors in the kernel-based model. There are a few other previous works done by generalizing convolution tree kernels (Kashima and Koyanagi, 2003; Moschitti, 2006; Zhang et al., 2007). However, all of these works limit themselves to single tree structure from modeling viewpoint in nature. From a broad viewpoint, as suggested by one reviewer of the paper, we can consider the forest kernel as an alternative solution proposed for the general problem of noisy inference pipelines (eg. speech translation by composition of FSTs, machine translation by translating over 'lattices' of segmentations (Dyer et al., 2008) or using parse tree info for downstream applications in our cases) . Following this line, Bunescu (2008) and Finkel et al. (2006) are two typical related works done in reducing cascading noisy. However, our works are not overlapped with each other as there are two totally different solutions for the same general problem. In addition, the main motivation of this paper is also different from theirs. 4 Experiments Forest kernel has a broad application potential in NLP. In this section, we verify the effectiveness of the forest kernel on two NLP applications, semantic role labeling (SRL) (Gildea, 2002) and relation extraction (RE) (ACE, 2002-2006). In our experiments, SVM (Vapnik, 1998) is selected as our classifier and the one vs. others strategy is adopted to select the one with the 881 largest margin as the final answer. In our implementation, we use the binary SVMLight (Joachims, 1998) and borrow the framework of the Tree Kernel Tools (Moschitti, 2004) to integrate our forest kernel into the SVMLight. We modify Charniak parser (Charniak, 2001) to output a packed forest. Following previous forest-based studies (Charniak and Johnson, 2005), we use the marginal probabilities of hyper-edges (i.e., the Viterbi-style inside-outside probabilities and set the pruning threshold as 8) for forest pruning. 4.1 Semantic role labeling Given a sentence and each predicate (either a target verb or a noun), SRL recognizes and maps all the constituents in the sentence into their corresponding semantic arguments (roles, e.g., A0 for Agent, A1 for Patient …) of the predicate or non-argument. We use the CoNLL-2005 shared task on Semantic Role Labeling (Carreras and Ma rquez, 2005) for the evaluation of our forest kernel method. To speed up the evaluation process, the same as Che et al. (2008), we use a subset of the entire training corpus (WSJ sections 02-05 of the entire sections 02-21) for training, section 24 for development and section 23 for test, where there are 35 roles including 7 Core (A0–A5, AA), 14 Adjunct (AM-) and 14 Reference (R-) arguments. The state-of-the-art SRL methods (Carreras and Ma rquez, 2005) use constituents as the labeling units to form the labeled arguments. Due to the errors from automatic parsing, it is impossible for all arguments to find their matching constituents in the single 1-best parse trees. Statistics on the training data shows that 9.78% of arguments have no matching constituents using the Charniak parser (Charniak, 2001), and the number increases to 11.76% when using the Collins parser (Collins, 1999). 
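As a rough, hypothetical analogue of the classifier setup described above (the paper itself plugs the kernel into SVMLight via the Tree Kernel Tools), a precomputed Gram matrix can be handed to an off-the-shelf SVM. The sketch below uses scikit-learn and is our illustration only; note that scikit-learn's SVC applies its own multi-class scheme rather than the one-vs-others strategy the paper uses.

```python
import numpy as np
from sklearn.svm import SVC

def gram_matrix(xs_a, xs_b, kernel):
    """K[i, j] = kernel(xs_a[i], xs_b[j]) for arbitrary structured objects."""
    return np.array([[kernel(a, b) for b in xs_b] for a in xs_a])

def train_and_predict(train_xs, train_y, test_xs, kernel):
    """Fit an SVM on a precomputed Gram matrix and label the test items.
    train_xs / test_xs may be trees or packed forests, as long as `kernel`
    accepts a pair of them (e.g. the normalized forest kernel of eq. (11))."""
    k_train = gram_matrix(train_xs, train_xs, kernel)   # (n_train, n_train)
    k_test = gram_matrix(test_xs, train_xs, kernel)     # (n_test, n_train)
    clf = SVC(kernel="precomputed")
    clf.fit(k_train, train_y)
    return clf.predict(k_test)
```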
In our method, we break the limitation of 1-best parse tree and regard each span rooted by a single forest node (i.e., a subforest with one or more roots) as a candidate argument. This largely reduces the unmatched arguments from 9.78% to 1.31% after forest pruning. However, it also results in a very large amount of argument candidates that is 5.6 times as many as that from 1-best tree. Fortunately, after the pre-processing stage of argument pruning (Xue and Palmer, 2004) 4 , although the 4 We extend (Xue and Palmer, 2004)’s argument pruning algorithm from tree-based to forest-based. The algorithm is very effective. It can prune out around 90% argument candidates in parse tree-based amount of unmatched argument increases a little bit to 3.1%, its generated total candidate amount decreases substantially to only 1.31 times of that from 1-best parse tree. This clearly shows the advantages of the forest-based method over treebased in SRL. The best-reported tree kernel method for SRL 𝐾𝑕𝑦𝑏𝑟𝑖𝑑= 𝜃∙𝐾𝑝𝑎𝑡𝑕+ (1 −𝜃) ∙𝐾𝑐𝑠 (0 ≤𝜃≤ 1), proposed by Che et al. (2006)5, is adopted as our baseline kernel. We implemented the 𝐾𝑕𝑦𝑏𝑟𝑖𝑑 in tree case ( 𝐾𝑇−𝑕𝑦𝑏𝑟𝑖𝑑, using tree kernel to compute 𝐾𝑝𝑎𝑡𝑕 and 𝐾𝑐𝑠) and in forest case (𝐾𝐹−𝑕𝑦𝑏𝑟𝑖𝑑, using tree kernel to compute 𝐾𝑝𝑎𝑡𝑕 and 𝐾𝑐𝑠). Precision Recall F-Score 𝐾𝑇−𝑕𝑦𝑏𝑟𝑖𝑑 (Tree) 76.02 67.38 71.44 𝐾𝐹−𝑕𝑦𝑏𝑟𝑖𝑑 (Forest) 79.06 69.12 73.76 Table 1: Performance comparison of SRL (%) Table 1 shows that the forest kernel significantly outperforms (𝜒2 test with p=0.01) the tree kernel with an absolute improvement of 2.32 (73.7671.42) percentage in F-Score, representing a relative error rate reduction of 8.19% (2.32/(10071.64)). This convincingly demonstrates the advantage of the forest kernel over the tree kernel. It suggests that the structured features represented by subtree are very useful to SRL. The performance improvement is mainly due to the fact that forest encodes much more such structured features and the forest kernel is able to more effectively capture such structured features than the tree kernel. Besides F-Score, both precision and recall also show significantly improvement (𝜒2 test with p=0.01). The reason for recall improvement is mainly due to the lower rate of unmatched argument (3.1% only) with only a little bit overhead (1.31 times) (see the previous discussion in this section). The precision improvement is mainly attributed to fact that we use sub-forest to represent argument instances, rather than subtree used in tree kernel, where the sub-tree is only one tree encoded in the sub-forest. SRL and thus makes the amounts of positive and negative training instances (arguments) more balanced. We apply the same pruning strategies to forest plus our heuristic rules to prune out some of the arguments with span overlapped with each other and those arguments with very small inside probabilities, depending on the numbers of candidates in the span. 5 Kpath and Kcs are two standard convolution tree kernels to describe predicate-argument path substructures and argument syntactic substructures, respectively. 882 4.2 Relation extraction As a subtask of information extraction, relation extraction is to extract various semantic relations between entity pairs from text. For example, the sentence “Bill Gates is chairman and chief software architect of Microsoft Corporation” conveys the semantic relation “EMPLOYMENT.executive” between the entities “Bill Gates” (person) and “Microsoft Corporation” (company). We adopt the method reported in Zhang et al. 
(2006) as our baseline method as it reports the state-of-the-art performance using tree kernel-based composite kernel method for RE. We replace their tree kernels with our forest kernels and use the same experimental settings as theirs. We carry out the same five-fold cross validation experiment on the same subset of ACE 2004 data (LDC2005T09, ACE 2002-2004) as that in Zhang et al. (2006). The data contain 348 documents and 4400 relation instances. In SRL, constituents are used as the labeling units to form the labeled arguments. However, previous work (Zhang et al., 2006) shows that if we use complete constituent (MCT) as done in SRL to represent relation instance, there is a large performance drop compared with using the path-enclosed tree (PT)6. By simulating PT, we use the minimal fragment of a forest covering the two entities and their internal words to represent a relation instance by only parsing the span covering the two entities and their internal words. Precision Recall F-Score Zhang et al. (2006):Tree 68.6 59.3 6 63.6 Ours: Forest 70.3 60.0 64.7 Table 2: Performance Comparison of RE (%) over 23 subtypes on the ACE 2004 data Table 2 compares the performance of the forest kernel and the tree kernel on relation extraction. We can see that the forest kernel significantly outperforms (𝜒2 test with p=0.05) the tree kernel by 1.1 point of F-score. This further verifies the effectiveness of the forest kernel method for 6 MCT is the minimal constituent rooted by the nearest common ancestor of the two entities under consideration while PT is the minimal portion of the parse tree (may not be a complete subtree) containing the two entities and their internal lexical words. Since in many cases, the two entities and their internal words cannot form a grammatical constituent, MCT may introduce too many noisy context features and thus lead to the performance drop. modeling NLP structured data. In summary, we further observe the high precision improvement that is consistent with the SRL experiments. However, the recall improvement is not as significant as observed in SRL. This is because unlike SRL, RE has no un-matching issues in generating relation instances. Moreover, we find that the performance improvement in RE is not as good as that in SRL. Although we know that performance is task-dependent, one of the possible reasons is that SRL tends to be long-distance grammatical structure-related while RE is local and semanticrelated as observed from the two experimental benchmark data. 5 Conclusions and Future Work Many NLP applications have benefited from the success of convolution kernel over parse tree. Since a packed parse forest contains much richer structured features than a parse tree, we are motivated to develop a technology to measure the syntactic similarity between two forests. To achieve this goal, in this paper, we design a convolution kernel over packed forest by generalizing the tree kernel. We analyze the object space of the forest kernel, the fractional count for feature value computing and design a dynamic programming algorithm to realize the forest kernel with quadratic time complexity. Compared with the tree kernel, the forest kernel is more robust against parsing errors and data sparseness issues. Among the broad potential NLP applications, the problems in SRL and RE provide two pointed scenarios to verify our forest kernel. Experimental results demonstrate the effectiveness of the proposed kernel in structured NLP data modeling and the advantages over tree kernel. 
In the future, we would like to verify the forest kernel in more NLP applications. In addition, as suggested by one reviewer, we may consider rescaling the probabilities (exponentiating them by a constant value) that are used to compute the fractional counts. We can sharpen or flatten the distributions. This basically says "how seriously do we want to take the very best derivation" compared to the rest. However, the challenge is that we compute the fractional counts together with the forest kernel recursively by using the Inside-Outside probabilities. We cannot differentiate the individual parse tree’s contribution to a fractional count on the fly. One possible solution is to do the probability rescaling off-line before kernel calculation. This would be a very interesting research topic of our future work. 883 References ACE (2002-2006). The Automatic Content Extraction Projects. http://www.ldc.upenn.edu/Projects/ACE/ Fabio Aiolli, Giovanni Da San Martino, Alessandro Sperduti and Alessandro Moschitti. 2006. Fast Online Kernel Learning for Trees. ICDM-2006 Fabio Aiolli, Giovanni Da San Martino, Alessandro Sperduti and Alessandro Moschitti. 2007. Efficient Kernel-based Learning for Trees. IEEE Symposium on Computational Intelligence and Data Mining (CIDM-2007) J. Baker. 1979. Trainable grammars for speech recognition. The 97th meeting of the Acoustical Society of America S. Billot and S. Lang. 1989. The structure of shared forest in ambiguous parsing. ACL-1989 Razvan Bunescu. 2008. Learning with Probabilistic Features for Improved Pipeline Models. EMNLP2008 X. Carreras and Lluıs Ma rquez. 2005. Introduction to the CoNLL-2005 shared task: SRL. CoNLL-2005 E. Charniak. 2001. Immediate-head Parsing for Language Models. ACL-2001 E. Charniak and Mark Johnson. 2005. Corse-to-finegrained n-best parsing and discriminative reranking. ACL-2005 Wanxiang Che, Min Zhang, Ting Liu and Sheng Li. 2006. A hybrid convolution tree kernel for semantic role labeling. COLING-ACL-2006 (poster) WanXiang Che, Min Zhang, Aiti Aw, Chew Lim Tan, Ting Liu and Sheng Li. 2008. Using a Hybrid Convolution Tree Kernel for Semantic Role Labeling. ACM Transaction on Asian Language Information Processing M. Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. dissertation, Pennsylvania University M. Collins and N. Duffy. 2002. Convolution Kernels for Natural Language. NIPS-2002 Christopher Dyer, Smaranda Muresan and Philip Resnik. 2008. Generalizing Word Lattice Translation. ACL-HLT-2008 Jenny Rose Finkel, Christopher D. Manning and Andrew Y. Ng. 2006. Solving the Problem of Cascading Errors: Approximate Bayesian Inference for Linguistic Annotation Pipelines. EMNLP-2006 Y. Freund and R. E. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296 D. Guldea. 2002. Probabilistic models of verbargument structure. COLING-2002 D. Haussler. 1999. Convolution Kernels on Discrete Structures. Technical Report UCS-CRL-99-10, University of California, Santa Cruz Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. ACL-2008 Karim Lari and Steve J. Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language. 4(35–56) H. Kashima and T. Koyanagi. 2003. Kernels for SemiStructured Data. ICML-2003 Dan Klein and Christopher D. Manning. 2001. Parsing and Hypergraphs. IWPT-2001 T. Joachims. 1998. 
Text Categorization with Support Vecor Machine: learning with many relevant features. ECML-1998 Haitao Mi and Liang Huang. 2008. Forest-based Translation Rule Extraction. EMNLP-2008 Alessandro Moschitti. 2004. A Study on Convolution Kernels for Shallow Semantic Parsing. ACL-2004 Alessandro Moschitti. 2006. Syntactic kernels for natural language learning: the semantic role labeling case. HLT-NAACL-2006 (short paper) Martha Palmer, Dan Gildea and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics. 31(1) F. Rosenblatt. 1962. Principles of Neurodynamics: Perceptrons and the theory of brain mechanisms. Spartan Books, Washington D.C. Masaru Tomita. 1987. An Efficient AugmentedContext-Free Parsing Algorithm. Computational Linguistics 13(1-2): 31-46 Vladimir N. Vapnik. 1998. Statistical Learning Theory. Wiley C. Watkins. 1999. Dynamic alignment kernels. In A. J. Smola, B. Sch¨olkopf, P. Bartlett, and D. Schuurmans (Eds.), Advances in kernel methods. MIT Press Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. EMNLP-2004 Xiaofeng Yang, Jian Su and Chew Lim Tan. 2006. Kernel-Based Pronoun Resolution with Structured Syntactic Knowledge. COLING-ACL-2006 Dell Zhang and W. Lee. 2003. Question classification using support vector machines. SIGIR-2003 Hui Zhang, Min Zhang, Haizhou Li, Aiti Aw and Chew Lim Tan. 2009a. Forest-based Tree Sequence to String Translation Model. ACLIJCNLP-2009 Hui Zhang, Min Zhang, Haizhou Li and Chew Lim Tan. 2009b. Fast Translation Rule Matching for 884 Syntax-based Statistical Machine Translation. EMNLP-2009 Min Zhang, Jie Zhang, Jian Su and GuoDong Zhou. 2006. A Composite Kernel to Extract Relations between Entities with Both Flat and Structured Features. COLING-ACL-2006 Min Zhang, W. Che, A. Aw, C. Tan, G. Zhou, T. Liu and S. Li. 2007. A Grammar-driven Convolution Tree Kernel for Semantic Role Classification. ACL-2007 Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan and Sheng Li. 2008. A Tree Sequence Alignment-based Tree-to-Tree Translation Model. ACL-2008 Min Zhang and Haizhou Li. 2009. Tree Kernel-based SVM with Structured Syntactic Knowledge for BTG-based Phrase Reordering. EMNLP-2009 885
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 886–896, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Estimating Strictly Piecewise Distributions Jeffrey Heinz University of Delaware Newark, Delaware, USA [email protected] James Rogers Earlham College Richmond, Indiana, USA [email protected] Abstract Strictly Piecewise (SP) languages are a subclass of regular languages which encode certain kinds of long-distance dependencies that are found in natural languages. Like the classes in the Chomsky and Subregular hierarchies, there are many independently converging characterizations of the SP class (Rogers et al., to appear). Here we define SP distributions and show that they can be efficiently estimated from positive data. 1 Introduction Long-distance dependencies in natural language are of considerable interest. Although much attention has focused on long-distance dependencies which are beyond the expressive power of models with finitely many states (Chomsky, 1956; Joshi, 1985; Shieber, 1985; Kobele, 2006), there are some long-distance dependencies in natural language which permit finite-state characterizations. For example, although it is well-known that vowel and consonantal harmony applies across any arbitrary number of intervening segments (Ringen, 1988; Bakovi´c, 2000; Hansson, 2001; Rose and Walker, 2004) and that phonological patterns are regular (Johnson, 1972; Kaplan and Kay, 1994), it is less well-known that harmony patterns are largely characterizable by the Strictly Piecewise languages, a subregular class of languages with independently-motivated, converging characterizations (see Heinz (2007, to appear) and especially Rogers et al. (2009)). As shown by Rogers et al. (to appear), the Strictly Piecewise (SP) languages, which make distinctions on the basis of (potentially) discontiguous subsequences, are precisely analogous to the Strictly Local (SL) languages (McNaughton and Papert, 1971; Rogers and Pullum, to appear), which make distinctions on the basis of contiguous subsequences. The Strictly Local languages are the formal-language theoretic foundation for n-gram models (Garcia et al., 1990), which are widely used in natural language processing (NLP) in part because such distributions can be estimated from positive data (i.e. a corpus) (Jurafsky and Martin, 2008). N-gram models describe probability distributions over all strings on the basis of the Markov assumption (Markov, 1913): that the probability of the next symbol only depends on the previous contiguous sequence of length n −1. From the perspective of formal language theory, these distributions are perhaps properly called Strictly k-Local distributions (SLk) where k = n. It is well-known that one limitation of the Markov assumption is its inability to express any kind of long-distance dependency. This paper defines Strictly k-Piecewise (SPk) distributions and shows how they too can be efficiently estimated from positive data. In contrast with the Markov assumption, our assumption is that the probability of the next symbol is conditioned on the previous set of discontiguous subsequences of length k −1 in the string. While this suggests the model has too many parameters (one for each subset of all possible subsequences), in fact the model has on the order of |Σ|k+1 parameters because of an independence assumption: there is no interaction between different subsequences. 
As a result, SP distributions are efficiently computable even though they condition the probability of the next symbol on the occurrences of earlier (possibly very distant) discontiguous subsequences. Essentially, these SP distributions reflect a kind of long-term memory. On the other hand, SP models have no shortterm memory and are unable to make distinctions on the basis of contiguous subsequences. We do not intend SP models to replace n-gram models, but instead expect them to be used alongside of 886 them. Exactly how this is to be done is beyond the scope of this paper and is left for future research. Since SP languages are the analogue of SL languages, which are the formal-language theoretical foundation for n-gram models, which are widely used in NLP, it is expected that SP distributions and their estimation will also find wide application. Apart from their interest to problems in theoretical phonology such as phonotactic learning (Coleman and Pierrehumbert, 1997; Hayes and Wilson, 2008; Heinz, to appear), it is expected that their use will have application, in conjunction with n-gram models, in areas that currently use them; e.g. augmentative communication (Newell et al., 1998), part of speech tagging (Brill, 1995), and speech recognition (Jelenik, 1997). §2 provides basic mathematical notation. §3 provides relevant background on the subregular hierarchy. §4 describes automata-theoretic characterizations of SP languages. §5 defines SP distributions. §6 shows how these distributions can be efficiently estimated from positive data and provides a demonstration. §7 concludes the paper. 2 Preliminaries We start with some mostly standard notation. Σ denotes a finite set of symbols and a string over Σ is a finite sequence of symbols drawn from that set. Σk, Σ≤k, Σ≥k, and Σ∗denote all strings over this alphabet of length k, of length less than or equal to k, of length greater than or equal to k, and of any finite length, respectively. ǫ denotes the empty string. |w| denotes the length of string w. The prefixes of a string w are Pfx(w) = {v : ∃u ∈Σ∗such that vu = w}. When discussing partial functions, the notation ↑ and ↓indicates that the function is undefined, respectively is defined, for particular arguments. A language L is a subset of Σ∗. A stochastic language D is a probability distribution over Σ∗. The probability p of word w with respect to D is written PrD(w) = p. Recall that all distributions D must satisfy P w∈Σ∗PrD(w) = 1. If L is language then PrD(L) = P w∈L PrD(w). A Deterministic Finite-state Automaton (DFA) is a tuple M = ⟨Q, Σ, q0, δ, F ⟩where Q is the state set, Σ is the alphabet, q0 is the start state, δ is a deterministic transition function with domain Q × Σ and codomain Q, F is the set of accepting states. Let ˆd : Q × Σ∗→ Q be the (partial) path function of M, i.e., ˆd(q, w) is the (unique) state reachable from state q via the sequence w, if any, or ˆd(q, w)↑otherwise. The language recognized by a DFA M is L(M) def = {w ∈Σ∗| ˆd(q0, w)↓∈F}. A state is useful iff for all q ∈Q, there exists w ∈Σ∗such that δ(q0, w) = q and there exists w ∈Σ∗such that δ(q, w) ∈ F. Useless states are not useful. DFAs without useless states are trimmed. Two strings w and v over Σ are distinguished by a DFA M iff ˆd(q0, w) ̸= ˆd(q0, v). They are Nerode equivalent with respect to a language L if and only if wu ∈L ⇐⇒ vu ∈L for all u ∈Σ∗. 
All DFAs which recognize L must distinguish strings which are inequivalent in this sense, but no DFA recognizing L necessarily distinguishes any strings which are equivalent. Hence the number of equivalence classes of strings over Σ modulo Nerode equivalence with respect to L gives a (tight) lower bound on the number of states required to recognize L. A DFA is minimal if the size of its state set is minimal among DFAs accepting the same language. The product of n DFAs M1 . . . Mn is given by the standard construction over the state space Q1 × . . . × Qn (Hopcroft et al., 2001). A Probabilistic Deterministic Finitestate Automaton (PDFA) is a tuple M = ⟨Q, Σ, q0, δ, F, T ⟩where Q is the state set, Σ is the alphabet, q0 is the start state, δ is a deterministic transition function, F and T are the final-state and transition probabilities. In particular, T : Q × Σ →R+ and F : Q →R+ such that for all q ∈Q, F(q) + X a∈Σ T(q, a) = 1. (1) Like DFAs, for all w ∈Σ∗, there is at most one state reachable from q0. PDFAs are typically represented as labeled directed graphs as in Figure 1. A PDFA M generates a stochastic language DM. If it exists, the (unique) path for a word w = a0 . . . ak belonging to Σ∗through a PDFA is a sequence ⟨(q0, a0), (q1, a1), . . . , (qk, ak)⟩, where qi+1 = δ(qi, ai). The probability a PDFA assigns to w is obtained by multiplying the transition probabilities with the final probability along w’s path if 887 A:2/10 b:2/10 c:3/10 B:4/9 a:3/10 a:2/9 b:2/9 c:1/9 Figure 1: A picture of a PDFA with states labeled A and B. The probabilities of T and F are located to the right of the colon. it exists, and zero otherwise. PrDM(w) = k Y i=1 T(qi−1, ai−1) ! · F(qk+1) (2) if ˆd(q0, w)↓and 0 otherwise A probability distribution is regular deterministic iff there is a PDFA which generates it. The structural components of a PDFA M are its states Q, its alphabet Σ, its transitions δ, and its initial state q0. By structure of a PDFA, we mean its structural components. Each PDFA M defines a family of distributions given by the possible instantiations of T and F satisfying Equation 1. These distributions have |Q|· (|Σ| + 1) independent parameters (since for each state there are |Σ| possible transitions plus the possibility of finality.) We define the product of PDFA in terms of coemission probabilities (Vidal et al., 2005a). Definition 1 Let A be a vector of PDFAs and let |A| = n. For each 1 ≤i ≤n let Mi = ⟨Qi, Σ, q0i, δi, Fi, Ti⟩be the ith PDFA in A. The probability that σ is co-emitted from q1, . . . , qn in Q1, . . . , Qn, respectively, is CT(⟨σ, q1 . . . qn⟩) = n Y i=1 Ti(qi, σ). Similarly, the probability that a word simultaneously ends at q1 ∈Q1 . . . qn ∈Qn is CF(⟨q1 . . . qn⟩) = n Y i=1 Fi(qi). Then N A = ⟨Q, Σ, q0, δ, F, T ⟩where 1. Q, q0, and δ are defined as with DFA product. 2. For all ⟨q1 . . . qn⟩ ∈ Q, let Z(⟨q1 . . . qn⟩) = CF(⟨q1 . . . qn⟩) + X σ∈Σ CT(⟨σ, q1 . . . qn⟩) be the normalization term; and (a) let F(⟨q1 . . . qn⟩) = CF (⟨q1 ... qn⟩) Z(⟨q1 ... qn⟩) ; and (b) for all σ ∈Σ, let T(⟨q1 . . . qn⟩, σ) = CT(⟨σ, q1 ... qn⟩) Z(⟨q1 ... qn⟩) In other words, the numerators of T and F are defined to be the co-emission probabilities (Vidal et al., 2005a), and division by Z ensures that M defines a well-formed probability distribution. Statistically speaking, the co-emission product makes an independence assumption: the probability of σ being co-emitted from q1, . . . 
, qn is exactly what one expects if there is no interaction between the individual factors; that is, between the probabilities of σ being emitted from any qi. Note also that the order of the product is irrelevant up to renaming of the states, and we therefore also speak of taking the product of a set of PDFAs (as opposed to an ordered vector).

Estimating regular deterministic distributions is a well-studied problem (Vidal et al., 2005a; Vidal et al., 2005b; de la Higuera, in press). We limit discussion to cases in which the structure of the PDFA is known. Let S be a finite sample of words drawn from a regular deterministic distribution D. The problem is to estimate the parameters T and F of M so that DM approaches D. We employ the widely-adopted maximum likelihood (ML) criterion for this estimation:

$(\hat{T}, \hat{F}) = \operatorname*{argmax}_{T,F} \prod_{w \in S} \Pr_M(w) \qquad (3)$

It is well known that if D is generated by some PDFA M′ with the same structural components as M, then optimizing the ML estimate guarantees that DM approaches D as the size of S goes to infinity (Vidal et al., 2005a; Vidal et al., 2005b; de la Higuera, in press).

The optimization problem (3) is simple for deterministic automata with known structural components. Informally, the corpus is passed through the PDFA, and the paths of each word through the corpus are tracked to obtain counts, which are then normalized by state. Let M = ⟨Q, Σ, δ, q0, F, T⟩ be the PDFA whose parameters F and T are to be estimated. For each state q ∈ Q and symbol a ∈ Σ, the ML estimate of T(q, a) is obtained by dividing the number of times this transition is used in parsing the sample S by the number of times state q is encountered in the parsing of S. Similarly, the ML estimate of F(q) is the relative frequency of state q being final out of the times state q is encountered in the parsing of S. In both cases the division is normalizing; i.e., it guarantees that there is a well-formed probability distribution at each state. Figure 2 illustrates the counts obtained for a machine M with sample S = {ab, bba, ǫ, cab, acb, cc};¹ a code sketch of this counting procedure is given below.

[Figure 2: The automaton shows the counts obtained by parsing M with sample S = {ab, bba, ǫ, cab, acb, cc}.]

¹Technically, this acceptor is neither a simple DFA nor a PDFA; rather, it has been called a Frequency DFA. We do not formally define them here; see (de la Higuera, in press).

3 Subregular Hierarchies

Within the class of regular languages there are dual hierarchies of language classes (Figure 3), one in which languages are defined in terms of their contiguous substrings (up to some length k, known as k-factors), starting with the languages that are Locally Testable in the Strict Sense (SL), and one in which languages are defined in terms of their not necessarily contiguous subsequences, starting with the languages that are Piecewise Testable in the Strict Sense (SP). Each language class in these hierarchies has independently motivated, converging characterizations and each has been claimed to correspond to specific, fundamental cognitive capabilities (McNaughton and Papert, 1971; Brzozowski and Simon, 1973; Simon, 1975; Thomas, 1982; Perrin and Pin, 1986; García and Ruiz, 1990; Beauquier and Pin, 1991; Straubing, 1994; García and Ruiz, 1996; Rogers and Pullum, to appear; Kontorovich et al., 2008; Rogers et al., to appear).

[Figure 3: Parallel Sub-regular Hierarchies.]
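Returning briefly to the counting procedure described just before Section 3: the sketch below (our own code and data layout, not the authors') implements the relative-frequency estimation for a PDFA with known structure, using a hypothetical two-state machine and the sample S = {ab, bba, ǫ, cab, acb, cc}. The machine here is our own toy and need not match Figure 1 exactly.

```python
from collections import defaultdict

def estimate_pdfa(delta, q0, sample):
    """ML estimation of T and F for a PDFA whose structure (delta, q0) is known.
    delta: dict mapping (state, symbol) -> next state (assumed total for sample)
    sample: iterable of words; returns (T, F) as relative-frequency dicts."""
    trans_count = defaultdict(float)   # uses of each (state, symbol) transition
    final_count = defaultdict(float)   # words ending in each state
    state_count = defaultdict(float)   # visits to each state
    for word in sample:
        q = q0
        for a in word:
            state_count[q] += 1
            trans_count[(q, a)] += 1
            q = delta[(q, a)]
        state_count[q] += 1            # the word ends here
        final_count[q] += 1
    T = {(q, a): c / state_count[q] for (q, a), c in trans_count.items()}
    F = {q: c / state_count[q] for q, c in final_count.items()}
    return T, F

# A toy two-state machine: 'a' moves A to B, everything else loops in place.
delta = {("A", "a"): "B", ("A", "b"): "A", ("A", "c"): "A",
         ("B", "a"): "B", ("B", "b"): "B", ("B", "c"): "B"}
T, F = estimate_pdfa(delta, "A", ["ab", "bba", "", "cab", "acb", "cc"])
# Here T[("A", "a")] == 4/11, F["A"] == 2/11 and F["B"] == 0.5; at every state
# the final probability plus the outgoing transition probabilities sum to 1,
# as Equation 1 requires.
```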
Languages in the weakest of these classes are defined only in terms of the set of factors (SL) or subsequences (SP) which are licensed to occur in the string (equivalently the complement of that set with respect to Σ≤k, the forbidden factors or forbidden subsequences). For example, the set containing the forbidden 2-factors {ab, ba} defines a Strictly 2-Local language which includes all strings except those with contiguous substrings {ab, ba}. Similarly since the parameters of ngram models (Jurafsky and Martin, 2008) assign probabilities to symbols given the preceding contiguous substrings up to length n −1, we say they describe Strictly n-Local distributions. These hierarchies have a very attractive modeltheoretic characterization. The Locally Testable (LT) and Piecewise Testable languages are exactly those that are definable by propositional formulae in which the atomic formulae are blocks of symbols interpreted factors (LT) or subsequences (PT) of the string. The languages that are testable in the strict sense (SL and SP) are exactly those that are definable by formulae of this sort restricted to conjunctions of negative literals. Going the other way, the languages that are definable by First-Order formulae with adjacency (successor) but not precedence (less-than) are exactly the Locally Threshold Testable (LTT) languages. The Star-Free languages are those that are First-Order definable with precedence alone (adjacency being FO definable from precedence). Finally, by extending to Monadic Second-Order formulae (with either signature, since they are MSO definable from each other), one obtains the full class of Regular languages (McNaughton and Papert, 1971; Thomas, 1982; Rogers and Pullum, to appear; Rogers et al., to appear). The relation between strings which is fundamental along the Piecewise branch is the subse889 quence relation, which is a partial order on Σ∗: w ⊑v def ⇐⇒w = ε or w = σ1 · · · σn and (∃w0, . . . , wn ∈Σ∗)[v = w0σ1w1 · · · σnwn]. in which case we say w is a subsequence of v. For w ∈Σ∗, let Pk(w) def= {v ∈Σk | v ⊑w} and P≤k(w) def = {v ∈Σ≤k | v ⊑w}, the set of subsequences of length k, respectively length no greater than k, of w. Let Pk(L) and P≤k(L) be the natural extensions of these to sets of strings. Note that P0(w) = {ε}, for all w ∈Σ∗, that P1(w) is the set of symbols occurring in w and that P≤k(L) is finite, for all L ⊆Σ∗. Similar to the Strictly Local languages, Strictly Piecewise languages are defined only in terms of the set of subsequences (up to some length k) which are licensed to occur in the string. Definition 2 (SPk Grammar, SP) A SPk grammar is a pair G = ⟨Σ, G⟩where G ⊆Σk. The language licensed by a SPk grammar is L(G) def= {w ∈Σ∗| P≤k(w) ⊆P≤k(G)}. A language is SPk iff it is L(G) for some SPk grammar G. It is SP iff it is SPk for some k. This paper is primarily concerned with estimating Strictly Piecewise distributions, but first we examine in greater detail properties of SP languages, in particular DFA representations. 4 DFA representations of SP Languages Following Sakarovitch and Simon (1983), Lothaire (1997) and Kontorovich, et al. (2008), we call the set of strings that contain w as a subsequence the principal shuffle ideal2 of w: SI(w) = {v ∈Σ∗| w ⊑v}. The shuffle ideal of a set of strings is defined as SI(S) = ∪w∈SSI(w) Rogers et al. (to appear) establish that the SP languages have a variety of characteristic properties. Theorem 1 The following are equivalent:3 2Properly SI(w) is the principal ideal generated by {w} wrt the inverse of ⊑. 
3For a complete proof, see Rogers et al. (to appear). We only note that 5 implies 1 by DeMorgan’s theorem and the fact that every shuffle ideal is finitely generated (see also Lothaire (1997)). 1 b c 2 a b c Figure 4: The DFA representation of SI(aa). 1. L = T w∈S[SI(w)], S finite, 2. L ∈SP 3. (∃k)[P≤k(w) ⊆P≤k(L) ⇒w ∈L], 4. w ∈L and v ⊑w ⇒v ∈L (L is subsequence closed), 5. L = SI(X), X ⊆Σ∗(L is the complement of a shuffle ideal). The DFA representation of the complement of a shuffle ideal is especially important. Lemma 1 Let w ∈ Σk, w = σ1 · · · σk, and MSI(w) = ⟨Q, Σ, q0, δ, F ⟩, where Q = {i | 1 ≤i ≤k}, q0 = 1, F = Q and for all qi ∈Q, σ ∈Σ: δ(qi, σ) =    qi+1 if σ = σi and i < k, ↑ if σ = σi and i = k, qi otherwise. Then MSI(w) is a minimal, trimmed DFA that recognizes the complement of SI(w), i.e., SI(w) = L(MSI(w)). Figure 4 illustrates the DFA representation of the complement of SI(aa) with Σ = {a, b, c}. It is easy to verify that the machine in Figure 4 accepts all and only those words which do not contain an aa subsequence. For any SPk language L = L(⟨Σ, G⟩) ̸= Σ∗, the first characterization (1) in Theorem 1 above yields a non-deterministic finite-state representation of L, which is a set A of DFA representations of complements of principal shuffle ideals of the elements of G. The trimmed automata product of this set yields a DFA, with the properties below (Rogers et al., to appear). Lemma 2 Let M be a trimmed DFA recognizing a SPk language constructed as described above. Then: 1. All states of M are accepting states: F = Q. 890 a b c b c b a c a b b c b b a b ǫ ǫ,a ǫ,b ǫ,c ǫ,a,b ǫ,b,c ǫ,a,c ǫ,a,b,c Figure 5: The DFA representation of the of the SP language given by G = ⟨{a, b, c}, {aa, bc}⟩. Names of the states reflect subsets of subsequences up to length 1 of prefixes of the language. Note this DFA is trimmed, but not minimal. 2. For all q1, q2 ∈Q and σ ∈Σ, if ˆd(q1, σ)↑ and ˆd(q1, w) = q2 for some w ∈Σ∗then ˆd(q2, σ)↑. (Missing edges propagate down.) Figure 5 illustrates with the DFA representation of the of the SP2 language given by G = ⟨{a, b, c}, {aa, bc}⟩. It is straightforward to verify that this DFA is identical (modulo relabeling of state names) to one obtained by the trimmed product of the DFA representations of the complement of the principal shuffle ideals of aa and bc, which are the prohibited subsequences. States in the DFA in Figure 5 correspond to the subsequences up to length 1 of the prefixes of the language. With this in mind, it follows that the DFA of Σ∗= L(Σ, Σk) has states which correspond to the subsequences up to length k −1 of the prefixes of Σ∗. Figure 6 illustrates such a DFA when k = 2 and Σ = {a, b, c}. In fact, these DFAs reveal the differences between SP languages and PT languages: they are exactly those expressed in Lemma 2. Within the state space defined by the subsequences up to length k −1 of the prefixes of the language, if the conditions in Lemma 2 are violated, then the DFAs describe languages that are PT but not SP. Pictorially, PT2 languages are obtained by arbitrarily removing arcs, states, and the finality of states from the DFA in Figure 6, and SP2 ones are obtained by non-arbitrarily removing them in accordance with Lemma 2. The same applies straightforwardly for any k (see Definition 3 below). a b c a b c b a c c a b a b c a c b b c a a b c ǫ ǫ,a ǫ,b ǫ,c ǫ,a,b ǫ,b,c ǫ,a,c ǫ,a,b,c Figure 6: A DFA representation of the of the SP2 language given by G = ⟨{a, b, c}, Σ2⟩. 
Names of the states reflect subsets of subsequences up to length 1 of prefixes of the language. Note this DFA is trimmed, but not minimal. 5 SP Distributions In the same way that SL distributions (n-gram models) generalize SL languages, SP distributions generalize SP languages. Recall that SP languages are characterizable by the intersection of the complements of principal shuffle ideals. SP distributions are similarly characterized. We begin with Piecewise-Testable distributions. Definition 3 A distribution D is k-Piecewise Testable (written D ∈PTDk) def ⇐⇒D can be described by a PDFA M = ⟨Q, Σ, q0, δ, F, T ⟩with 1. Q = {P≤k−1(w) : w ∈Σ∗} 2. q0 = P≤k−1(ǫ) 3. For all w ∈ Σ∗and all σ ∈ Σ, δ(P≤k−1(w), a) = P≤k−1(wa) 4. F and T satisfy Equation 1. In other words, a distribution is k-Piecewise Testable provided it can be represented by a PDFA whose structural components are the same (modulo renaming of states) as those of the DFA discussed earlier where states corresponded to the subsequences up to length k −1 of the prefixes of the language. The DFA in Figure 6 shows the 891 structure of a PDFA which describes a PT2 distribution as long as the assigned probabilities satisfy Equation 1. The following lemma follows directly from the finite-state representation of PTk distributions. Lemma 3 Let D belong to PTDk and let M = ⟨Q, Σ, q0, δ, F, T ⟩be a PDFA representing D defined according to Definition 3. PrD(σ1 . . . σn) = T(P≤k−1(ǫ), σ1) ·  Y 2≤i≤n T(P≤k−1(σ1 . . . σi−1), σi)  (4) · F(P≤k−1(w)) PTk distributions have 2|Σ|k−1(|Σ|+1) parameters (since there are 2|Σ|k−1 states and |Σ| + 1 possible events, i.e. transitions and finality). Let Pr(σ | #) and Pr(# | P≤k(w)) denote the probability (according to some D ∈PTDk) that a word begins with σ and ends after observing P≤k(w). Then Equation 4 can be rewritten in terms of conditional probability as PrD(σ1 . . . σn) = Pr(σ1 | #) ·  Y 2≤i≤n Pr(σi | P≤k−1(σ1 . . . σi−1))  (5) · Pr(# | P≤k−1(w)) Thus, the probability assigned to a word depends not on the observed contiguous sequences as in a Markov model, but on observed subsequences. Like SP languages, SP distributions can be defined in terms of the product of machines very similar to the complement of principal shuffle ideals. Definition 4 Let w ∈Σk−1 and w = σ1 · · · σk−1. Mw = ⟨Q, Σ, q0, δ, F, T ⟩is a w-subsequencedistinguishing PDFA (w-SD-PDFA) iff Q = Pfx(w), q0 = ǫ, for all u ∈Pfx(w) and each σ ∈Σ, δ(u, σ) = uσ iff uσ ∈Pfx(w) and u otherwise and F and T satisfy Equation 1. Figure 7 shows the structure of Ma which is almost the same as the complement of the principal shuffle ideal in Figure 4. The only difference is the additional self-loop labeled a on the rightmost state labeled a. Ma defines a family of distributions over Σ∗, and its states distinguish those b c a a a b c ǫ Figure 7: The structure of PDFA Ma. It is the same (modulo state names) as the DFA in Figure 4 except for the self-loop labeled a on state a. strings which contain a (state a) from those that do not (state ǫ). A set of PDFAs is a k-set of SDPDFAs iff, for each w ∈Σ≤k−1, it contains exactly one w-SD-PDFA. In the same way that missing edges propagate down in DFA representations of SP languages (Lemma 2), the final and transitional probabilities must propagate down in PDFA representations of SPk distributions. In other words, the final and transitional probabilities at states further along paths beginning at the start state must be determined by final and transitional probabilities at earlier states non-increasingly. 
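Before turning to how that propagation requirement is enforced, a brief computational aside may be useful. The sketch below computes the probability a PTk distribution assigns to a word, following Definition 3 and Equation 4: the PDFA state reached after reading a prefix is simply the set of subsequences of that prefix up to length k−1, and it can be updated one symbol at a time. The dictionaries T and F, the function names, and the use of Python frozensets are illustrative assumptions, not anything fixed by the paper.

```python
# Sketch: Pr_D(w) for a k-Piecewise Testable distribution (Definition 3,
# Equation 4).  A state is represented as the frozenset of subsequences
# (length <= k-1) of the prefix read so far; T[(state, symbol)] and
# F[state] are assumed to hold the transition and final probabilities.

def next_state(state, symbol, k):
    # delta(P_{<=k-1}(w), symbol) = P_{<=k-1}(w symbol): every old
    # subsequence survives, and those shorter than k-1 may be extended
    # by the new symbol.
    return frozenset(state | {u + symbol for u in state if len(u) < k - 1})

def prob(word, T, F, k):
    state = frozenset({""})          # P_{<=k-1}(epsilon)
    p = 1.0
    for symbol in word:
        p *= T[(state, symbol)]
        state = next_state(state, symbol, k)
    return p * F[state]
```

For an SPk distribution the same loop applies; the only difference is that T and F are constrained to factor over the observed subsequences, as developed in what follows.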
This is captured by defining SP distributions as a product of k-sets of SD-PDFAs (see Definition 5 below). While the standard product based on coemission probability could be used for this purpose, we adopt a modified version of it defined for k-sets of SD-PDFAs: the positive co-emission probability. The automata product based on the positive co-emission probability not only ensures that the probabilities propagate as necessary, but also that such probabilities are made on the basis of observed subsequences, and not unobserved ones. This idea is familiar from n-gram models: the probability of σn given the immediately preceding sequence σ1 . . . σn−1 does not depend on the probability of σn given the other (n −1)-long sequences which do not immediately precede it, though this is a logical possibility. Let A be a k-set of SD-PDFAs. For each w ∈Σ≤k−1, let Mw = ⟨Qw, Σ, q0w, δw, Fw, Tw⟩ be the w-subsequence-distinguishing PDFA in A. The positive co-emission probability that σ is simultaneously emitted from states qǫ, . . . , qu from the statesets Qǫ, . . . Qu, respectively, of each SD892 PDFA in A is PCT(⟨σ, qǫ . . . qu⟩) = Y qw∈⟨qǫ...qu⟩ qw=w Tw(qw, σ) (6) Similarly, the probability that a word simultaneously ends at n states qǫ ∈Qǫ, . . . , qu ∈Qu is PCF(⟨qǫ . . . qu⟩) = Y qw∈⟨qǫ...qu⟩ qw=w Fw(qw) (7) In other words, the positive co-emission probability is the product of the probabilities restricted to those assigned to the maximal states in each Mw. For example, consider a 2-set of SDPDFAs A with Σ = {a, b, c}. A contains four PDFAs Mǫ, Ma, Mb, Mc. Consider state q = ⟨ǫ, ǫ, b, c⟩∈N A (this is the state labeled ǫ, b, c in Figure 6). Then CT(a, q) = Tǫ(ǫ, a)· Ta(ǫ, a)· Tb(b, a)· Tc(c, a) but PCT(a, q) = Tǫ(ǫ, a)· Tb(b, a)· Tc(c, a) since in PDFA Ma, the state ǫ is not the maximal state. The positive co-emission product (⊗+) is defined just as with co-emission probabilities, substituting PCT and PCF for CT and CF, respectively, in Definition 1. The definition of ⊗+ ensures that the probabilities propagate on the basis of observed subsequences, and not on the basis of unobserved ones. Lemma 4 Let k ≥1 and let A be a k-set of SDPDFAs. Then ⊗+S defines a well-formed probability distribution over Σ∗. Proof Since Mǫ belongs to A, it is always the case that PCT and PCF are defined. Wellformedness follows from the normalization term as in Definition 1. ⊣⊣⊣ Definition 5 A distribution D is k-Strictly Piecewise (written D ∈SPDk) def ⇐⇒D can be described by a PDFA which is the positive co-emission product of a k-set of subsequence-distinguishing PDFAs. By Lemma 4, SP distributions are well-formed. Unlike PDFAs for PT distributions, which distinguish 2|Σ|k−1 states, the number of states in a kset of SD-PDFAs is P i<k(i + 1)|Σ|i, which is Θ(|Σ|k+1). Furthermore, since each SD-PDFA only has one state contributing |Σ|+1 probabilities to the product, and since there are |Σ≤k| = |Σ|k−1 |Σ|−1 many SD-PDFAs in a k-set, there are |Σ|k −1 |Σ| −1 · (|Σ| + 1) = |Σ|k+1 + |Σ|k −|Σ| −1 |Σ| −1 parameters, which is Θ(|Σ|k). Lemma 5 Let D ∈SPDk. Then D ∈PTDk. Proof Since D ∈ SPDk, there is a k-set of subsequence-distinguishing PDFAs. The product of this set has the same structure as the PDFA given in Definition 3. ⊣⊣⊣ Theorem 2 A distribution D ∈SPDk if D can be described by a PDFA M = ⟨Q, Σ, q0, δ, F, T ⟩ satisfying Definition 3 and the following. For all w ∈Σ∗and all σ ∈Σ, let Z(w) = Y s∈P≤k−1(w) F(P≤k−1(s)) + X σ′∈Σ   Y s∈P≤k−1(w) T(P≤k−1(s), σ′)  (8) (This is the normalization term.) 
Then T must satisfy: T(P≤k−1(w), σ) = Q s∈P≤k−1(w) T(P≤k−1(s), σ) Z(w) (9) and F must satisfy: F(P≤k−1(w)) = Q s∈P≤k−1(w) F(P≤k−1(s)) Z(w) (10) Proof That SPDk satisfies Definition 3 Follows directly from Lemma 5. Equations 8-10 follow from the definition of positive co-emission probability. ⊣⊣⊣ The way in which final and transitional probabilities propagate down in SP distributions is reflected in the conditional probability as defined by Equations 9 and 10. In terms of conditional probability, Equations 9 and 10 mean that the probability that σi follows a sequence σ1 . . . σi−1 is not only a function of P≤k−1(σ1 . . . σi−1) (Equation 4) but further that it is a function of each subsequence in σ1 . . . σi−1 up to length k −1. 893 In particular, Pr(σi | P≤k−1(σ1 . . . σi−1)) is obtained by substituting Pr(σi | P≤k−1(s)) for T(P≤k−1(s), σ) and Pr(# | P≤k−1(s)) for F(P≤k−1(s)) in Equations 8, 9 and 10. For example, for a SP2 distribution, the probability of a given P≤1(bc) (state ǫ, b, c in Figure 6) is the normalized product of the probabilities of a given P≤1(ǫ), a given P≤1(b), and a given P≤1(c). To summarize, SP and PT distributions are regular deterministic. Unlike PT distributions, however, SP distributions can be modeled with only Θ(|Σ|k) parameters and Θ(|Σ|k+1) states. This is true even though SP distributions distinguish 2|Σ|k−1 states! Since SP distributions can be represented by a single PDFA, computing Pr(w) occurs in only Θ(|w|) for such PDFA. While such PDFA might be too large to be practical, Pr(w) can also be computed from the k-set of SD-PDFAs in Θ(|w|k) (essentially building the path in the product machine on the fly using Equations 4, 8, 9 and 10). 6 Estimating SP Distributions The problem of ML estimation of SPk distributions is reduced to estimating the parameters of the SD-PDFAs. Training (counting and normalization) occurs over each of these machines (i.e. each machine parses the entire corpus), which gives the ML estimates of the parameters of the distribution. It trivially follows that this training successfully estimates any D ∈SPDk. Theorem 3 For any D ∈SPDk, let D generate sample S. Let A be the k-set of SD-PDFAs which describes exactly D. Then optimizing the MLE of S with respect to each M ∈A guarantees that the distribution described by the positive co-emission product of N+ A approaches D as |S| increases. Proof The MLE estimate of S with respect to SPDk returns the parameter values that maximize the likelihood of S. The parameters of D ∈SPDk are found on the maximal states of each M ∈A. By definition, each M ∈A describes a probability distribution over Σ∗, and similarly defines a family of distributions. Therefore finding the MLE of S with respect to SPDk means finding the MLE estimate of S with respect to each of the family of distributions which each M ∈A defines, respectively. Optimizing the ML estimate of S for each M ∈A means that as |S| increases, the estimates ˆTM and ˆFM approach the true values TM and FM. It follows that as |S| increases, ˆTN+ A and ˆFN+ A approach the true values of TN+ A and FN+ A and consequently DN+ A approaches D. 
⊣⊣⊣ We demonstrate learning long-distance dependencies by estimating SP2 distributions given a corpus from Samala (Chumash), a language with sibilant harmony.4 There are two classes of sibilants in Samala: [-anterior] sibilants like [s] and [>ts] and [+anterior] sibilants like [S] and [>tS].5 Samala words are subject to a phonological process wherein the last sibilant requires earlier sibilants to have the same value for the feature [anterior], no matter how many sounds intervene (Applegate, 1972). As a consequence of this rule, there are generally no words in Samala where [-anterior] sibilants follow [+anterior]. E.g. [StojonowonowaS] ‘it stood upright’ (Applegate 1972:72) is licit but not *[Stojonowonowas]. The results of estimating D ∈ SPD2 with the corpus is shown in Table 6. The results clearly demonstrate the effectiveness of the model: the probability of a [α anterior] sibilant given P≤1([-α anterior]) sounds is orders of magnitude less than given P≤1(α anterior]) sounds. x Pr(x | P≤1(y)) s >ts S >tS s 0.0335 0.0051 0.0011 0.0002 ⁀ts 0.0218 0.0113 0.0009 0. y S 0.0009 0. 0.0671 0.0353 >tS 0.0006 0. 0.0455 0.0313 Table 1: Results of SP2 estimation on the Samala corpus. Only sibilants are shown. 7 Conclusion SP distributions are the stochastic version of SP languages, which model long-distance dependencies. Although SP distributions distinguish 2|Σ|k−1 states, they do so with tractably many parameters and states because of an assumption that distinct subsequences do not interact. As shown, these distributions are efficiently estimable from positive data. As previously mentioned, we anticipate these models to find wide application in NLP. 4The corpus was kindly provided by Dr. Richard Applegate and drawn from his 2007 dictionary of Samala. 5Samala actually contrasts glottalized, aspirated, and plain variants of these sounds (Applegate, 1972). These laryngeal distinctions are collapsed here for easier exposition. 894 References R.B. Applegate. 1972. Inese˜no Chumash Grammar. Ph.D. thesis, University of California, Berkeley. R.B. Applegate. 2007. Samala-English dictionary : a guide to the Samala language of the Inese˜no Chumash People. Santa Ynez Band of Chumash Indians. Eric Bakovi´c. 2000. Harmony, Dominance and Control. Ph.D. thesis, Rutgers University. D. Beauquier and Jean-Eric Pin. 1991. Languages and scanners. Theoretical Computer Science, 84:3–21. Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543–566. J. A. Brzozowski and Imre Simon. 1973. Characterizations of locally testable events. Discrete Mathematics, 4:243–271. Noam Chomsky. 1956. Three models for the description of language. IRE Transactions on Information Theory. IT-2. J. S. Coleman and J. Pierrehumbert. 1997. Stochastic phonological grammars and acceptability. In Computational Phonology, pages 49–56. Somerset, NJ: Association for Computational Linguistics. Third Meeting of the ACL Special Interest Group in Computational Phonology. Colin de la Higuera. in press. Grammatical Inference: Learning Automata and Grammars. Cambridge University Press. Pedro Garc´ıa and Jos´e Ruiz. 1990. Inference of ktestable languages in the strict sense and applications to syntactic pattern recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9:920–925. Pedro Garc´ıa and Jos´e Ruiz. 1996. Learning kpiecewise testable languages from positive data. 
In Laurent Miclet and Colin de la Higuera, editors, Grammatical Interference: Learning Syntax from Sentences, volume 1147 of Lecture Notes in Computer Science, pages 203–210. Springer. Pedro Garcia, Enrique Vidal, and Jos´e Oncina. 1990. Learning locally testable languages in the strict sense. In Proceedings of the Workshop on Algorithmic Learning Theory, pages 325–338. Gunnar Hansson. 2001. Theoretical and typological issues in consonant harmony. Ph.D. thesis, University of California, Berkeley. Bruce Hayes and Colin Wilson. 2008. A maximum entropy model of phonotactics and phonotactic learning. Linguistic Inquiry, 39:379–440. Jeffrey Heinz. 2007. The Inductive Learning of Phonotactic Patterns. Ph.D. thesis, University of California, Los Angeles. Jeffrey Heinz. to appear. Learning long distance phonotactics. Linguistic Inquiry. John Hopcroft, Rajeev Motwani, and Jeffrey Ullman. 2001. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley. Frederick Jelenik. 1997. Statistical Methods for Speech Recognition. MIT Press. C. Douglas Johnson. 1972. Formal Aspects of Phonological Description. The Hague: Mouton. A. K. Joshi. 1985. Tree-adjoining grammars: How much context sensitivity is required to provide reasonable structural descriptions? In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Parsing, pages 206–250. Cambridge University Press. Daniel Jurafsky and James Martin. 2008. Speech and Language Processing: An Introduction to Natural Language Processing, Speech Recognition, and Computational Linguistics. Prentice-Hall, 2nd edition. Ronald Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3):331–378. Gregory Kobele. 2006. Generating Copies: An Investigation into Structural Identity in Language and Grammar. Ph.D. thesis, University of California, Los Angeles. Leonid (Aryeh) Kontorovich, Corinna Cortes, and Mehryar Mohri. 2008. Kernel methods for learning languages. Theoretical Computer Science, 405(3):223 – 236. Algorithmic Learning Theory. M. Lothaire, editor. 1997. Combinatorics on Words. Cambridge University Press, Cambridge, UK, New York. A. A. Markov. 1913. An example of statistical study on the text of ‘eugene onegin’ illustrating the linking of events to a chain. Robert McNaughton and Simon Papert. 1971. Counter-Free Automata. MIT Press. A. Newell, S. Langer, and M. Hickey. 1998. The rˆole of natural language processing in alternative and augmentative communication. Natural Language Engineering, 4(1):1–16. Dominique Perrin and Jean-Eric Pin. 1986. FirstOrder logic and Star-Free sets. Journal of Computer and System Sciences, 32:393–406. Catherine Ringen. 1988. Vowel Harmony: Theoretical Implications. Garland Publishing, Inc. 895 James Rogers and Geoffrey Pullum. to appear. Aural pattern recognition experiments and the subregular hierarchy. Journal of Logic, Language and Information. James Rogers, Jeffrey Heinz, Matt Edlefsen, Dylan Leeman, Nathan Myers, Nathaniel Smith, Molly Visscher, and David Wellcome. to appear. On languages piecewise testable in the strict sense. In Proceedings of the 11th Meeting of the Assocation for Mathematics of Language. Sharon Rose and Rachel Walker. 2004. A typology of consonant agreement as correspondence. Language, 80(3):475–531. Jacques Sakarovitch and Imre Simon. 1983. Subwords. In M. Lothaire, editor, Combinatorics on Words, volume 17 of Encyclopedia of Mathematics and Its Applications, chapter 6, pages 105–134. Addison-Wesley, Reading, Massachusetts. 
Stuart Shieber. 1985. Evidence against the contextfreeness of natural language. Linguistics and Philosophy, 8:333–343. Imre Simon. 1975. Piecewise testable events. In Automata Theory and Formal Languages: 2nd Grammatical Inference conference, pages 214–222, Berlin ; New York. Springer-Verlag. Howard Straubing. 1994. Finite Automata, Formal Logic and Circuit Complexity. Birkh¨auser. Wolfgang Thomas. 1982. Classifying regular events in symbolic logic. Journal of Computer and Systems Sciences, 25:360–376. Enrique Vidal, Franck Thollard, Colin de la Higuera, Francisco Casacuberta, and Rafael C. Carrasco. 2005a. Probabilistic finite-state machines-part I. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(7):1013–1025. Enrique Vidal, Frank Thollard, Colin de la Higuera, Francisco Casacuberta, and Rafael C. Carrasco. 2005b. Probabilistic finite-state machines-part II. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(7):1026–1039. 896
2010
91
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 897–906, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics String Extension Learning Jeffrey Heinz University of Delaware Newark, Delaware, USA [email protected] Abstract This paper provides a unified, learningtheoretic analysis of several learnable classes of languages discussed previously in the literature. The analysis shows that for these classes an incremental, globally consistent, locally conservative, set-driven learner always exists. Additionally, the analysis provides a recipe for constructing new learnable classes. Potential applications include learnable models for aspects of natural language and cognition. 1 Introduction The problem of generalizing from examples to patterns is an important one in linguistics and computer science. This paper shows that many disparate language classes, many previously discussed in the literature, have a simple, natural and interesting (because non-enumerative) learner which exactly identifies the class in the limit from distribution-free, positive evidence in the sense of Gold (Gold, 1967).1 These learners are called String Extension Learners because each string in the language can be mapped (extended) to an element of the grammar, which in every case, is conceived as a finite set of elements. These learners have desirable properties: they are incremental, globally consistent, and locally conservative. Classes previously discussed in the literature which are string extension learnable include the Locally Testable (LT) languages, the Locally Testable Languages in the Strict Sense 1The allowance of negative evidence (Gold, 1967) or restricting the kinds of texts the learner is required to succeed on (i.e. non-distribution-free evidence) (Gold, 1967; Horning, 1969; Angluin, 1988) admits the learnability of the class of recursively enumerable languages. Classes of languages learnable in the harder, distribution-free, positive-evidenceonly settings are due to structural properties of the language classes that permit generalization (Angluin, 1980b; Blumer et al., 1989). That is the central interest here. (Strictly Local, SL) (McNaughton and Papert, 1971; Rogers and Pullum, to appear), the Piecewise Testable (PT) languages (Simon, 1975), the Piecewise Testable languages in the Strict Sense (Strictly Piecewise, SP) (Rogers et al., 2009), the Strongly Testable languages (Beauquier and Pin, 1991), the Definite languages (Brzozowski, 1962), and the Finite languages, among others. To our knowledge, this is the first analysis which identifies the common structural elements of these language classes which allows them to be identifiable in the limit from positive data: each language class induces a natural partition over all logically possible strings and each language in the class is the union of finitely many blocks of this partition. One consequence of this analysis is a recipe for constructing new learnable classes. One notable case is the Strictly Piecewise (SP) languages, which was originally motivated for two reasons: the learnability properties discussed here and its ability to describe long-distance dependencies in natural language phonology (Heinz, 2007; Heinz, to appear). Later this class was discovered to have several independent characterizations and form the basis of another subregular hierarchy (Rogers et al., 2009). It is expected string extension learning will have applications in linguistic and cognitive models. 
As mentioned, the SP languages already provide a novel hypothesis of how long-distance dependencies in sound patterns are learned. Another example is the Strictly Local (SL) languages which are the categorical, symbolic version of n-gram models, which are widely used in natural language processing (Jurafsky and Martin, 2008). Since the SP languages also admit a probabilistic variant which describe an efficiently estimable class of distributions (Heinz and Rogers, 2010), it is plausible to expect the other classes will as well, though this is left for future research. String extension learners are also simple, mak897 ing them accessible to linguists without a rigorous mathematical background. This paper is organized as follow. §2 goes over basic notation and definitions. §3 defines string extension grammars, languages, and language classes and proves some of their fundamental properties. §4 defines string extension learners and proves their behavior. §5 shows how important subregular classes are string extension language classes. §6 gives examples of nonregular and infinite language classes which are string extension learnable. §7 summarizes the results, and discusses lines of inquiry for future research. 2 Preliminaries This section establishes notation and recalls basic definitions for formal languages, the paradigm of identification in the limit from positive data (Gold, 1967). Familiarity with the basic concepts of sets, functions, and sequences is assumed. For some set A, P(A) denotes the set of all subsets of A and Pfin(A) denotes the set of all finite subsets of A. If f is a function such that f : A →B then let f ⋄(a) = {f(a)}. Thus, f ⋄: A →P(B) (note f ⋄is not surjective). A set π of nonempty subsets of S is a partition of S iff the elements of π (called blocks) are pairwise disjoint and their union equals S. Σ denotes a fixed finite set of symbols, the alphabet. Let Σn, Σ≤n, Σ∗, Σ+ denote all strings formed over this alphabet of length n, of length less than or equal to n, of any finite length, and of any finite length strictly greater than zero, respectively. The term word is used interchangeably with string. The range of a string w is the set of symbols which are in w. The empty string is the unique string of length zero denoted λ. Thus range(λ) = ∅. The length of a string u is denoted by |u|, e.g. |λ| = 0. A language L is some subset of Σ∗. The reverse of a language Lr = {wr : w ∈L}. Gold (1967) establishes a learning paradigm known as identification in the limit from positive data. A text is an infinite sequence whose elements are drawn from Σ∗∪{#} where # represents a non-expression. The ith element of t is denoted t(i), and t[i] denotes the finite sequence t(0), t(1), . . . t(i). Following Jain et al. (1999), let SEQ denote the set of all possible finite sequences: SEQ = {t[i] : t is a text and i ∈ N} The content of a text is defined below. content(t) = {w ∈Σ∗: ∃n ∈ N such that t(n) = w} A text t is a positive text for a language L iff content(t) = L. Thus there is only one text t for the empty language: for all i, t(i) = #. A learner is a function φ which maps initial finite sequences of texts to grammars, i.e. φ : SEQ →G. The elements of G (the grammars) generate languages in some well-defined way. A learner converges on a text t iff there exists i ∈ N and a grammar G such that for all j > i, φ(t[j]) = G. For any grammar G, the language it generates is denoted L(G). 
A learner φ identifies a language L in the limit iff for any positive text t for L, φ converges on t to grammar G and L(G) = L. Finally, a learner φ identifies a class of languages L in the limit iff for any L ∈L, φ identifies L in the limit. Angluin (1980b) provides necessary and sufficient properties of language classes which are identifiable in the limit from positive data. A learner φ of language class L is globally consistent iff for each i and for all texts t for some L ∈L, content(t[i]) ⊆L(φ(t[i])). A learner φ is locally conservative iff for each i and for all texts t for some L ∈L, whenever φ(t[i]) ̸= φ(t[i −1]), it is the case that t(i) ̸∈L(φ([i−1])). These terms are from Jain et al. (2007). Also, learners which do not depend on the order of the text are called set-driven (Jain et al., 1999, p. 99). 3 Grammars and Languages Consider some set A. A string extension function is a total function f : Σ∗→Pfin(A). It is not required that f be onto. Denote the class of functions which have this general form SEF. Each string extension function is naturally associated with some formal class of grammars and languages. These functions, grammars, and languages are called string extension functions, grammars, and languages, respectively. Definition 1 Let f ∈SEF. 1. A grammar is a finite subset of A. 2. The language of grammar G is Lf(G) = {w ∈Σ∗: f(w) ⊆G} 898 3. The class of languages obtained by all possible grammars is Lf = {Lf(G) : G ∈Pfin(A)} The subscript f is omitted when it is understood from context. A function f ∈SEF naturally induces a partition πf over Σ∗. Strings u and v are equivalent (u ∼f v) iff f(u) = f(v). Theorem 1 Every language L ∈Lf is a finite union of blocks of πf. Proof: Follows directly from the definition of ∼f and the finiteness of string extension grammars. 2 We return to this result in §6. Theorem 2 Lf is closed under intersection. Proof: We show L1∩L2 = L(G1∩G2). Consider any word w belonging to L1 and L2. Then f(w) is a subset of G1 and of G2. Thus f(w) ⊆G1 ∩ G2, and therefore w ∈L(G1 ∩G2). The other inclusion follows similarly. 2 String extension language classes are not in general closed under union or reversal (counterexamples to union closure are given in §5.1 and to reversal closure in §6.) It is useful to extend the domain of the function f from strings to languages. f(L) = [ w∈L f(w) (1) An element g of grammar G for language L = Lf(G) is useful iff g ∈f(L). An element is useless if it is not useful. A grammar with no useless elements is called canonical. Remark 1 Fix a function f ∈SEF. For every L ∈Lf, there is a canonical grammar, namely f(L). In other words, L = L(f(L)). Lemma 1 Let L, L′ ∈Lf. L ⊆L′ iff f(L) ⊆ f(L′) Proof: (⇒) Suppose L ⊆L′ and consider any g ∈f(L). Since g is useful, there is a w ∈L such that g ∈f(w). But f(w) ⊆f(L′) since w ∈L′. (⇐) Suppose f(L) ⊆f(L′) and consider any w ∈L. Then f(w) ⊆f(L) so by transitivity, f(w) ⊆f(L′). Therefore w ∈L′. 2 The significance of this result is that as the grammar G monotonically increases, the language L(G) monotonically increases too. The following result can now be proved, used in the next section on learning.2 Theorem 3 For any finite L0 ⊆ Σ∗, L = L(f(L0)) is the smallest language in Lf containing L0. Proof: Clearly L0 ⊆L. Suppose L′ ∈Lf and L0 ⊆L′. It follows directly from Lemma 1 that L ⊆L′ (since f(L) = f(L0) ⊆f(L′)). 2 4 String Extension Learning Learning string extension classes is simple. The initial hypothesis of the learner is the empty grammar. 
The learner’s next hypothesis is obtained by applying function f to the current observation and taking the union of that set with the previous one. Definition 2 For all f ∈SEF and for all t ∈ SEQ, define φf as follows: φf(t[i]) =    ∅ if i = −1 φf(t[i −1]) if t(i) = # φf(t[i −1]) ∪f(t(i)) otherwise By convention, the initial state of the grammar is given by φ(t[−1]) = ∅. The learner φf exemplifies string extension learning. Each individual string in the text reveals, by extension with f, aspects of the canonical grammar for L ∈Lf. Theorem 4 φf is globally consistent, locally conservative, and set-driven. Proof: Global consistness and local conservativeness follow immediately from Definition 2. For set-drivenness, witness (by Definition 2) it is the case that for any text t and any i ∈ N, φ(t[i]) = f(content(t[i])). 2 The key to the proof that φf identifies Lf in the limit from positive data is the finiteness of G for all L(G) ∈L. The idea is that there is a point in the text in which every element of the grammar has been seen because (1) there are only finitely many useful elements of G, and (2) the learner is guaranteed to see a word in L which yields (via f) each element of G at some point (since the learner receives a positive text for L). Thus at this point 2The requirement in Theorem 3 that L0 be finite can be dropped if the qualifier “in Lf” be dropped as well. This can be seen when one considers the identity function and the class of finite languages. (The identity function is a string extension function, see §6.) In this case, id(Σ∗) = Σ∗, but Σ∗is not a member of Lfin. However since the interest here is learners which generalize on the basis of finite experience, Theorem 3 is sufficient as is. 899 the learner φ is guaranteed to have converged to the target G as no additional words will add any more elements to the learner’s grammar. Lemma 2 For all L ∈Lf, there is a finite sample S such that L is the smallest language in Lf containing S. S is called a characteristic sample of L in Lf (S is also called a tell-tale). Proof: For L ∈Lf, construct the sample S as follows. For each g ∈f(L), choose some word w ∈L such that g ∈f(w). Since f(L) is finite (Remark 1), S is finite. Clearly f(S) = f(L) and thus L = L(f(S)). Therefore, by Theorem 3, L is the smallest language in Lf containing S. 2 Theorem 5 Fix f ∈SEF. Then φf identifies Lf in the limit. Proof: For any L ∈Lf, there is a characteristic finite sample S for L (Lemma 2). Thus for any text t for L, there is i such that S ⊆content(t[i]). Thus for any j > i, φ(t(j)) is the smallest language in Lf containing S by Theorem 3 and Lemma 2. Thus, φ(t(j)) = f(S) = f(L). 2 An immediate corollary is the efficiency of φf in the length of the sample, provided f is efficient in the length of the string (de la Higuera, 1997). Corollary 1 φf is efficient in the length of the sample iff f is efficiently computable in the length of a string. To summarize: string extension grammars are finite subsets of some set A. The class of languages they generate are determined by a function f which maps strings to finite subsets of A (chunks of grammars). Since the size of the canonical grammars is finite, a learner which develops a grammar on the basis of the observed words and the function f identifies this class exactly in the limit from positive data. It also follows that if f is efficient in the length of the string then φf is efficient in the length of the sample and that φf is globally consistent, locally conservative, and setdriven. 
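Because Definitions 1 and 2 are so compact, they translate almost directly into code. The following sketch is one way to write the membership test and the learner φf for an arbitrary string extension function f; the function names, the use of Python sets for grammars, and the use of "#" for the non-expression are illustrative choices rather than anything prescribed by the paper.

```python
# Sketch of Definition 1 (membership in L_f(G)) and Definition 2 (the
# learner phi_f).  f is any string extension function returning a finite
# set; a grammar G is a plain set; "#" marks the non-expression.

def in_language(w, f, G):
    # w is in L_f(G) iff f(w) is a subset of G
    return set(f(w)) <= G

def string_extension_learner(initial_segment, f):
    # initial_segment is t(0), ..., t(i) for some text t.  The learner
    # starts from the empty grammar and unions in f(t(j)) for every
    # observed word, so it is incremental, globally consistent, locally
    # conservative, and set-driven.
    G = set()
    for w in initial_segment:
        if w != "#":
            G |= set(f(w))
    return G
```

Instantiating f with any of the functions defined in the next section (fack, SPk, LRIk, and so on) yields the corresponding learner.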
It is striking that such a natural and general framework for generalization exists and that, as will be shown, a variety of language classes can be expressed given the choice of f. 5 Subregular examples This section shows how classes which make up the subregular hierarchies (McNaughton and Papert, 1971) are string extension language classes. Readers are referred to Rogers and Pullum (2007) and Rogers et al. (2009) for an introduction to the subregular hierarchies, as well as their relevance to linguistics and cognition. 5.1 K-factor languages The k-factors of a word are the contiguous subsequences of length k in w. Consider the following string extension function. Definition 3 For some k ∈ N, let fack(w) = {x ∈Σk : ∃u, v ∈Σ∗ such that w = uxv} when k ≤|w| and {w} otherwise Following the earlier definitions, for some k, a grammar G is a subset of Σ≤k and a word w belongs to the language of G iff fack(w) ⊆G. Example 1 Let Σ = {a, b} and consider grammars G = {λ, a, aa, ab, ba}. Then L(G) = {λ, a} ∪{w : |w| ≥2 and w ̸∈Σ∗bbΣ∗}. The 2factor bb is a prohibited 2-factor for L(G). Clearly, L(G) ∈Lfac2. Languages in Lfack make distinctions based on which k-factors are permitted or prohibited. Since fack ∈SEF, it follows immediately from the results in §§3-4 that the k-factor languages are closed under intersection, and each has a characteristic sample. For example, a characteristic sample for the 2-factor language in Example 1 is {λ, a, ab, ba, aa}; i.e. the canonical grammar itself. It follows from Theorem 5 that the class of k-factor languages is identifiable in the limit by φfack. The learner φfac2 with a text from the language in Example 1 is illustrated in Table 1. The class Lfack is not closed under union. For example for k = 2, consider L1 = L({λ, a, b, aa, bb, ba}) and L2 = L({λ, a, b, aa, ab, bb}). Then L1 ∪L2 excludes string aba, but includes ab and ba, which is not possible for any L ∈Lfack. K-factors are used to define other language classes, such as the Strictly Local and Locally Testable languages (McNaughton and Papert, 1971), discussed in §5.4 and §5.5. 5.2 Strictly k-Piecewise languages The Strictly k-Piecewise (SPk) languages (Rogers et al., 2009) can be defined with a function whose co-domain is P(Σ≤k). However unlike the function fack, the function SPk, does not require that the k-length subsequences be contiguous. 900 i t(i) fac2(t(i)) Grammar G L(G) -1 ∅ ∅ 0 aaaa {aa} {aa} aaa∗ 1 aab {aa, ab} {aa, ab} aaa∗∪aaa∗b 2 a {a} {a, aa, ab} aa∗∪aa∗b . . . Table 1: The learner φfac2 with a text from the language in Example 1. Boldtype indicates newly added elements to the grammar. A string u = a1 . . . ak is a subsequence of string w iff ∃v0, v1, . . . vk ∈Σ∗such that w = v0a1v1 . . . akvk. The empty string λ is a subsequence of every string. When u is a subsequence of w we write u ⊑w. Definition 4 For some k ∈ N, SPk(w) = {u ∈Σ≤k : u ⊑w} In other words, SPk(w) returns all subsequences, contiguous or not, in w up to length k. Thus, for some k, a grammar G is a subset of Σ≤k. Following Definition 1, a word w belongs to the language of G only if SP2(w) ⊆G.3 Example 2 Let Σ = {a, b} and consider the grammar G = {λ, a, b, aa, ab, ba}. Then L(G) = Σ∗\(Σ∗bΣ∗bΣ∗). As seen from Example 2, SP languages encode long-distance dependencies. In Example 2, L prohibits a b from following another b in a word, no matter how distant. Table 2 illustrates φSP2 learning the language in Example 2. 
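For concreteness, here is one way the two extension functions of Sections 5.1 and 5.2 might be written so that they plug directly into the learner sketched above; the closures and the use of "" for λ are illustrative choices. The asserts check them against Examples 1 and 2.

```python
# Sketch of fac_k (Definition 3) and SP_k (Definition 4).  The empty
# string "" stands for lambda.

def fac_k(k):
    def f(w):
        if len(w) < k:                       # short words map to themselves
            return {w}
        return {w[i:i + k] for i in range(len(w) - k + 1)}
    return f

def sp_k(k):
    def f(w):
        subs = {""}                          # lambda is a subsequence of every string
        for c in w:
            subs |= {u + c for u in subs if len(u) < k}
        return subs
    return f

# Example 1: G = {lambda, a, aa, ab, ba} forbids the 2-factor bb.
G1 = {"", "a", "aa", "ab", "ba"}
assert fac_k(2)("abab") <= G1 and not fac_k(2)("abba") <= G1

# Example 2: G = {lambda, a, b, aa, ab, ba} forbids a b from following a b.
G2 = {"", "a", "b", "aa", "ab", "ba"}
assert sp_k(2)("baaa") <= G2 and not sp_k(2)("abab") <= G2
```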
Heinz (2007,2009a) shows that consonantal harmony patterns in natural language are describable by such SP2 languages and hypothesizes that humans learn them in the way suggested by φSP2. Strictly 2-Piecewise languages have also been used in models of reading comprehension (Whitney, 2001; Grainger and Whitney, 2004; Whitney and Cornelissen, 2008) as well as text classification(Lodhi et al., 2002; Cancedda et al., 2003) (see also (Shawe-Taylor and Christianini, 2005, chap. 11)). 5.3 K-Piecewise Testable languages A language L is k-Piecewise Testable iff whenever strings u and v have the same subsequences 3In earlier work, the function SP2 has been described as returning the set of precedence relations in w, and the language class LSP2 was called the precedence languages (Heinz, 2007; Heinz, to appear). of length at most k and u is in L, then v is in L as well (Simon, 1975; Simon, 1993; Lothaire, 2005). A language L is said to be Piecewise-Testable (PT) if it is k-Piecewise Testable for some k ∈ N. If k is fixed, the k-Piecewise Testable languages are identifiable in the limit from positive data (Garc´ıa and Ruiz, 1996; Garc´ıa and Ruiz, 2004). More recently, the Piecewise Testable languages has been shown to be linearly separable with a subsequence kernel (Kontorovich et al., 2008). The k-Piecewise Testable languages can also be described with the function SP ⋄ k . Recall that f ⋄(a) = {f(a)}. Thus functions SP ⋄ k define grammars as a finite list of sets of subsequences up to length k that may occur in words in the language. This reflects the fact that the k-Piecewise Testable languages are the boolean closure of the Strictly k-Piecewise languages.4 5.4 Strictly k-Local languages To define the Strictly k-Local languages, it is necessary to make a pointwise extension to the definitions in §3. Definition 5 For sets A1, . . . , An, suppose for each i, fi : Σ∗ →Pfin(Ai), and let f = (f1, . . . , fn). 1. A grammar G is a tuple (G1, . . . , Gn) where G1 ∈Pfin(A1), . . . , Gn ∈Pfin(An). 2. If for any w ∈Σ∗, each fi(w) ⊆Gi for all 1 ≤i ≤n, then f(w) is a pointwise subset of G, written f(w) ⊆· G. 3. The language of grammar G is Lf(G) = {w : f(w) ⊆· G} 4. The class of languages obtained by all such possible grammars G is Lf. 4More generally, it is not hard to show that Lf⋄is the boolean closure of Lf. 901 i t(i) SP2(t(i)) Grammar G Language of G -1 ∅ ∅ 0 aaaa {λ, a, aa} {λ, a, aa} a∗ 1 aab {λ, a, b, aa, ab} {λ, a, aa, b, ab} a∗∪a∗b 2 baa {λ, a, b, aa, ba} {λ, a, b, aa, ab, ba} Σ∗\(Σ∗bΣ∗bΣ∗) 3 aba {λ, a, b, ab, ba} {λ, a, b, aa, ab, ba} Σ∗\(Σ∗bΣ∗bΣ∗) . . . Table 2: The learner φSP2 with a text from the language in Example 2. Boldtype indicates newly added elements to the grammar. These definitions preserve the learning results of §4. Note that the characteristic sample of L ∈ Lf will be the union of the characteristic samples of each fi and the language Lf(G) is the intersection of Lfi(Gi). Locally k-Testable Languages in the Strict Sense (Strictly k-Local) have been studied by several researchers (McNaughton and Papert, 1971; Garcia et al., 1990; Caron, 2000; Rogers and Pullum, to appear), among others. We follow the definitions from (McNaughton and Papert, 1971, p. 14), effectively encoded in the following functions. Definition 6 Fix k ∈ N. Then the (left-edge) prefix of length k, the (right-edge) suffix of length k, and the interior k-factors of a word w are Lk(w) = {u ∈Σk : ∃v ∈Σ∗such that w = uv} Rk(w) = {u ∈Σk : ∃v ∈Σ∗such that w = vu} Ik(w) = fack(w)\(Lk(w) ∪Rk(w)) Example 3 Suppose w = abcba. 
Then L2(w) = {ab}, R2(w) = {ba} and I2(w) = {bc, cb}. Example 4 Suppose |w| = k. Then Lk(w) = Rk(w) = {w} and Ik(w) = ∅. Example 5 Suppose |w| is less than k. Then Lk(w) = Rk(w) = ∅and Ik(w) = {w}. A language L is k-Strictly Local (k-SL) iff for all w ∈L, there exist sets L, R, and I such that w ∈L iff Lk(w) ⊆L, Rk(w) ⊆R, and Ik(w) ⊆I. McNaughton and Papert note that if w is of length less than k than L may be perfectly arbitrary about w. This can now be expressed as the string extension function: LRIk(w) = (Lk(w), Rk(w), Ik(w)) Thus for some k, a grammar G is triple formed by taking subsets of Σk, Σk, and Σ≤k, respectively. A word w belongs to the language of G only if LRIk(w) ⊆· G. Clearly, LLRIk = kSL, and henceforth we refer to this class as k-SL. Since, for fixed k, LRIk ∈SEF, all of the learning results in §4 apply. 5.5 Locally k-Testable languages The Locally k-testable languages (k-LT) are originally defined in McNaughton and Papert (1971) and are the subject of several studies (Brzozowski and Simon, 1973; McNaughton, 1974; Kim et al., 1991; Caron, 2000; Garc´ıa and Ruiz, 2004; Rogers and Pullum, to appear). A language L is k-testable iff for all w1, w2 ∈ Σ∗such that |w1| ≥k and |w2| ≥k, and LRIk(w1) = LRIk(w2) then either both w1, w2 belong to L or neither do. Clearly, every language in k-SL belongs to k-LT. However k-LT properly include k-SL because a k-testable language only distinguishes words whenever LRIk(w1) ̸= LRIk(w2). It is known that the k-LT languages are the boolean closure of the k-SL (McNaughton and Papert, 1971). The function LRI⋄ k exactly expresses k-testable languages. Informally, each word w is mapped to a set containing a single element, this element is the triple LRIk(w). Thus a grammar G is a subset of the triples used to define k-SL. Clearly, LLRI⋄ k = k-LT since it is the boolean closure of LLRIk. Henceforth we refer to LLRI⋄ k as the kLocally Testable (k-LT) languages. 5.6 Generalized subsequence languages Here we introduce generalized subsequence functions, a general class of functions to which the SPk and fack functions belong. Like those functions, generalized subsequence functions map words to a set of subsequences found within the words. These functions are instantiated by a vector whose number of coordinates determine how many times a subsequence may be discontiguous 902 and whose coordinate values determine the length of each contiguous part of the subsequence. Definition 7 For some n ∈ N, let ⃗v = ⟨v0, v1, . . . , vn⟩, where each vi ∈ N. Let k be the length of the subsequences; i.e. k = Pn 0 vi. f⃗v(w) = {u ∈Σk : ∃x0, . . . , xn, u0, . . . , un+1 ∈Σ∗ such that w = u0x0u1x1, . . . , unxnun+1 and |xi| = vi for all 0 ≤i ≤n} when k ≤|w|, and{w} otherwise The following examples help make the generalized subsequence functions clear. Example 6 Let ⃗v = ⟨2⟩. Then f⟨2⟩= fac2. Generally, f⟨k⟩= fack. Example 7 Let ⃗v = ⟨1, 1⟩. Then f⟨1,1⟩= SP2. Generally, if ⃗v = ⟨1, . . . 1⟩with |⃗v| = k. Then f⃗v = SPk. Example 8 Let ⃗v = ⟨3, 2, 1⟩and a, b, c, d, e, f∈ Σ. Then Lf⟨3,2,1⟩includes languages which prohibit strings w which contain subsequences abcdef where abc and de must be contiguous in w and abcdef is a subsequence of w. Generalized subsequence languages make different kinds of distinctions to be made than PT and LT languages. For example, the language in Example 8 is neither k-LT nor k′-PT for any values k, k′. 
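Since the generalized subsequence functions subsume both fack and SPk, a small sketch of Definition 7 may help; as before it is only an illustration, and the asserts restate Examples 6 and 7.

```python
# Sketch of Definition 7: f_v(w), for v = <v_0, ..., v_n>, collects every
# string obtained by reading n+1 non-overlapping contiguous blocks of
# lengths v_0, ..., v_n from w in left-to-right order and concatenating
# them; if the blocks cannot fit, the word maps to itself.

def gen_subseq(v):
    k = sum(v)
    def f(w):
        if k > len(w):
            return {w}
        def rec(start, i):
            if i == len(v):
                return {""}
            needed = sum(v[i:])
            out = set()
            for p in range(start, len(w) - needed + 1):
                block = w[p:p + v[i]]
                out |= {block + rest for rest in rec(p + v[i], i + 1)}
            return out
        return rec(0, 0)
    return f

# Example 6: f_<2> returns the 2-factors, like fac_2.
assert gen_subseq([2])("abcba") == {"ab", "bc", "cb", "ba"}
# Example 7: f_<1,1> returns the length-2 (possibly discontiguous) subsequences.
assert gen_subseq([1, 1])("aba") == {"ab", "aa", "ba"}
```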
Generalized subsequence languages properly include the k-SP and k-SL classes (Examples 6 and 7), and the boolean closure of the subsequence languages (f ⋄ ⃗v ) properly includes the LT and PT classes. Since for any ⃗v, f⃗v and f ⋄ ⃗v are string extension functions the learning results in §4 apply. Note that f⃗v(w) is computable in time O(|w|k) where k is the length of the maximal subsequences determined by ⃗v. 6 Other examples This section provides examples of infinite and nonregular language classes that are string extension learnable. Recall from Theorem 1 that string extension languages are finite unions of blocks of the partition of Σ∗induced by f. Assuming the blocks of this partition can be enumerated, the range of f can be construed as Pfin(N). grammar G Language of G ∅ ∅ {0} anbn {1} Σ∗\anbn {0, 1} Σ∗ Table 3: The language class Lf from Example 9 In the examples considered so far, the enumeration of the blocks is essentially encoded in particular substrings (or tuples of substrings). However, much less clever enumerations are available. Example 9 Let A = {0,1} and consider the following function: f(w) =  0 iff w ∈anbn 1 otherwise The function f belongs to SEF because it is maps strings to a finite co-domain. Lf has four languages shown in Table 3. The language class in Example 9 is not regular because it includes the well-known context-free language anbn. This collection of languages is also not closed under reversal. There are also infinite language classes that are string extension language classes. Arguably the simplest example is the class of finite languages, denoted Lfin. Example 10 Consider the function id which maps words in Σ∗to their singleton sets, i.e. id(w) = {w}.5 A grammar G is then a finite subset of Σ∗, and so L(G) is just a finite set of words in Σ∗; in fact, L(G) = G. It follows that Lid = Lfin. It can be easily seen that the function id induces the trivial partition over Σ∗, and languages are just finite unions of these blocks. The learner φid makes no generalizations at all, and only remembers what it has observed. There are other more interesting infinite string extension classes. Here is one relating to the Parikh map (Parikh, 1966). For all a ∈Σ, let fa(w) be the set containing n where n is the number of times the letter a occurs in the string w. For 5Strictly speaking, this is not the identity function per se, but it is as close to the identity function as one can get since string extension functions are defined as mappings from strings to sets. However, once the domain of the function is extended (Equation 1), then it follows that id is the identity function when its argument is a set of strings. 903 example fa(babab) = {2}. Thus fa is a total function mapping strings to singleton sets of natural numbers, so it is a string extension function. This function induces an infinite partition of Σ∗, where the words in any particular block have the same number of letters a. It is convenient to enumerate the blocks according to how many occurrences of the letter a may occur in words within the block. Hence, B0 is the block whose words have no occurrences of a, B1 is the block whose words have one occurrence of a, and so on. In this case, a grammar G is a finite subset of N, e.g. {2, 3, 4}. L(G) is simply those words which have either 2, 3, or 4, occurrences of the letter a. Thus Lfa is an infinite class, which contains languages of infinite size, which is easily identified in the limit from positive data by φfa. 
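The letter-counting function is simple enough to state in two lines, and running the generic learner with it just accumulates the attested counts; the particular sample below is an illustration only.

```python
# Sketch of the Parikh-style extension function f_a from this section:
# f_a(w) is the singleton set holding the number of a's in w.

def f_a(w):
    return {w.count("a")}

observed = ["babab", "aaab", "aaaa"]     # 2, 3, and 4 occurrences of a
G = set()
for w in observed:
    G |= f_a(w)
assert G == {2, 3, 4}
# L(G) is then exactly the set of strings containing two, three, or four a's.
```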
This section gave examples of nonregular and nonfinite string extension classes by pursuing the implications of Theorem 1, which established that f ∈SEF partition Σ∗into blocks of which languages are finite unions thereof. The string extension function f provides an effective way of encoding all languages L in Lf because f(L) encodes a finite set, the grammar. 7 Conclusion and open questions One contribution of this paper is a unified way of thinking about many formal language classes, all of which have been shown to be identifiable in the limit from positive data by a string extension learner. Another contribution is a recipe for defining classes of languages identifiable in the limit from positive data by this kind of learner. As shown, these learners have many desirable properties. In particular, they are globally consistent, locally conservative, and set-driven. Additionally, the learner is guaranteed to be efficient in the size of the sample, provided the function f itself is efficient in the length of the string. Several additional questions of interest remain open for theoretical linguistics, theoretical computer science, and computational linguistics. For theoretical linguistics, it appears that the string extension function f = (LRI3, P2), which defines a class of languages which obey restrictions on both contiguous subsequences of length 3 and on discontiguous subsequences of length 2, provides a good first approximation to the segmental phonotactic patterns in natural languages (Heinz, 2007). The string extension learner for this class is essentially two learners: φLRI3 and φP2, operating simultaneously.6 The learners make predictions about generalizations, which can be tested in artificial language learning experiments on adults and infants (Rogers and Pullum, to appear; Chambers et al., 2002; Onishi et al., 2003; Cristi´a and Seidl, 2008).7 For theoretical computer science, it remains an open question what property holds of functions f in SEF to ensure that Lf is regular, contextfree, or context-sensitive. For known subregular classes, there are constructions that provide deterministic automata that suggest the relevant properties. (See, for example, Garcia et al. (1990) and Garica and Ruiz (1996).) Also, Timo K¨otzing and Samuel Moelius (p.c.) suggest that the results here may be generalized along the following lines. Instead of defining the function f as a map from strings to finite subsets, let f be a function from strings to elements of a lattice. A grammar G is an element of the lattice and the language of the G are all strings w such that f maps w to a grammar less than G. Learners φf are defined as the least upper bound of its current hypothesis and the grammar to which f maps the current word.8 Kasprzik and K¨otzing (2010) develop this idea and demonstrate additional properties of string extension classes and learning, and show that the pattern languages (Angluin, 1980a) form a string extension class.9 Also, hyperplane learning (Clark et al., 2006a; Clark et al., 2006b) and function-distinguishable learning (Fernau, 2003) similarly associate language classes with functions. How those analyses relate to the current one remains open. Finally, since the stochastic counterpart of kSL class is the n-gram model, it is plausible that probabilistic string extension language classes can form the basis of new natural language processing techniques. 
(Heinz and Rogers, 2010) show 6This learner resembles what learning theorists call parallel learning (Case and Moelius, 2007) and what cognitive scientists call modular learning (Gallistel and King, 2009). 7I conjecture that morphological and syntactic patterns are generally not amenable to a string extension learning analysis because these patterns appear to require a paradigm, i.e. a set of data points, before any conclusion can be confidently drawn about the generating grammar. Stress patterns also do not appear to be amenable to a string extension learning (Heinz, 2007; Edlefsen et al., 2008; Heinz, 2009). 8See also Lange et al. (2008, Theorem 15) and Case et al. (1999, pp.101-103). 9The basic idea is to consider the lattice L = ⟨Lfin, ⊇⟩. Each element of L is a finite set of strings representing the intersection of all pattern languages consistent with this set. 904 how to efficiently estimate k-SP distributions, and it is conjectured that the other string extension language classes can be recast as classes of distributions, which can also be successfully estimated from positive evidence. Acknowledgments This work was supported by a University of Delaware Research Fund grant during the 20082009 academic year. I would like to thank John Case, Alexander Clark, Timo K¨otzing, Samuel Moelius, James Rogers, and Edward Stabler for valuable discussion. I would also like to thank Timo K¨otzing for careful reading of an earlier draft and for catching some errors. Remaining errors are my responsibility. References Dana Angluin. 1980a. Finding patterns common to a set of strings. Journal of Computer and System Sciences, 21:46–62. Dana Angluin. 1980b. Inductive inference of formal languages from positive data. Information Control, 45:117–135. Dana Angluin. 1988. Identifying languages from stochastic examples. Technical Report 614, Yale University, New Haven, CT. D. Beauquier and J.E. Pin. 1991. Languages and scanners. Theoretical Computer Science, 84:3–21. Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth. 1989. Learnability and the Vapnik-Chervonenkis dimension. J. ACM, 36(4):929–965. J.A. Brzozowski and I. Simon. 1973. Characterization of locally testable events. Discrete Math, 4:243– 271. J.A. Brzozowski. 1962. Canonical regular expressions and minimal state graphs for definite events. In Mathematical Theory of Automata, pages 529–561. New York. Nicola Cancedda, Eric Gaussier, Cyril Goutte, and Jean-Michel Renders. 2003. Word-sequence kernels. Journal of Machine Learning Research, 3:1059–1082. Pascal Caron. 2000. Families of locally testable languages. Theoretical Computer Science, 242:361– 376. John Case and Sam Moelius. 2007. Parallelism increases iterative learning power. In 18th Annual Conference on Algorithmic Learning Theory (ALT07), volume 4754 of Lecture Notes in Artificial Intelligence, pages 49–63. Springer-Verlag, Berlin. John Case, Sanjay Jain, Steffen Lange, and Thomas Zeugmann. 1999. Incremental concept learning for bounded data mining. Information and Computation, 152:74–110. Kyle E. Chambers, Kristine H. Onishi, and Cynthia Fisher. 2002. Learning phonotactic constraints from brief auditory experience. Cognition, 83:B13–B23. Alexander Clark, Christophe Costa Florˆencio, and Chris Watkins. 2006a. Languages as hyperplanes: grammatical inference with string kernels. In Proceedings of the European Conference on Machine Learning (ECML), pages 90–101. Alexander Clark, Christophe Costa Florˆencio, Chris Watkins, and Mariette Serayet. 2006b. 
Planar languages and learnability. In Proceedings of the 8th International Colloquium on Grammatical Inference (ICGI), pages 148–160. Alejandrina Cristi´a and Amanda Seidl. 2008. Phonological features in infants phonotactic learning: Evidence from artificial grammar learning. Language, Learning, and Development, 4(3):203–227. Colin de la Higuera. 1997. Characteristic sets for polynomial grammatical inference. Machine Learning, 27:125–138. Matt Edlefsen, Dylan Leeman, Nathan Myers, Nathaniel Smith, Molly Visscher, and David Wellcome. 2008. Deciding strictly local (SL) languages. In Jon Breitenbucher, editor, Proceedings of the Midstates Conference for Undergraduate Research in Computer Science and Mathematics, pages 66–73. Henning Fernau. 2003. Identification of function distinguishable languages. Theoretical Computer Science, 290:1679–1711. C.R. Gallistel and Adam Philip King. 2009. Memory and the Computational Brain. Wiley-Blackwell. Pedro Garc´ıa and Jos´e Ruiz. 1996. Learning kpiecewise testable languages from positive data. In Laurent Miclet and Colin de la Higuera, editors, Grammatical Interference: Learning Syntax from Sentences, volume 1147 of Lecture Notes in Computer Science, pages 203–210. Springer. Pedro Garc´ıa and Jos´e Ruiz. 2004. Learning k-testable and k-piecewise testable languages from positive data. Grammars, 7:125–140. Pedro Garcia, Enrique Vidal, and Jos´e Oncina. 1990. Learning locally testable languages in the strict sense. In Proceedings of the Workshop on Algorithmic Learning Theory, pages 325–338. E.M. Gold. 1967. Language identification in the limit. Information and Control, 10:447–474. J. Grainger and C. Whitney. 2004. Does the huamn mnid raed wrods as a wlohe? Trends in Cognitive Science, 8:58–59. 905 Jeffrey Heinz and James Rogers. 2010. Estimating strictly piecewise distributions. In Proceedings of the ACL. Jeffrey Heinz. 2007. The Inductive Learning of Phonotactic Patterns. Ph.D. thesis, University of California, Los Angeles. Jeffrey Heinz. 2009. On the role of locality in learning stress patterns. Phonology, 26(2):303–351. Jeffrey Heinz. to appear. Learning long distance phonotactics. Linguistic Inquiry. J. J. Horning. 1969. A Study of Grammatical Inference. Ph.D. thesis, Stanford University. Sanjay Jain, Daniel Osherson, James S. Royer, and Arun Sharma. 1999. Systems That Learn: An Introduction to Learning Theory (Learning, Development and Conceptual Change). The MIT Press, 2nd edition. Sanjay Jain, Steffen Lange, and Sandra Zilles. 2007. Some natural conditions on incremental learning. Information and Computation, 205(11):1671–1684. Daniel Jurafsky and James Martin. 2008. Speech and Language Processing: An Introduction to Natural Language Processing, Speech Recognition, and Computational Linguistics. Prentice-Hall, Upper Saddle River, NJ, 2nd edition. Anna Kasprzik and Timo K¨otzing. to appear. String extension learning using lattices. In Proceedings of the 4th International Conference on Language and Automata Theory and Applications (LATA 2010), Trier, Germany. S.M. Kim, R. McNaughton, and R. McCloskey. 1991. A polynomial time algorithm for the local testability problem of deterministic finite automata. IEEE Trans. Comput., 40(10):1087–1093. Leonid (Aryeh) Kontorovich, Corinna Cortes, and Mehryar Mohri. 2008. Kernel methods for learning languages. Theoretical Computer Science, 405(3):223 – 236. Algorithmic Learning Theory. Steffen Lange, Thomas Zeugmann, and Sandra Zilles. 2008. 
Learning indexed families of recursive languages from positive data: A survey. Theoretical Computer Science, 397:194–232. H. Lodhi, N. Cristianini, J. Shawe-Taylor, and C. Watkins. 2002. Text classification using string kernels. Journal of Machine Language Research, 2:419–444. M. Lothaire, editor. 2005. Applied Combinatorics on Words. Cmbridge University Press, 2nd edition. Robert McNaughton and Seymour Papert. 1971. Counter-Free Automata. MIT Press. R. McNaughton. 1974. Algebraic decision procedures for local testability. Math. Systems Theory, 8:60–76. Kristine H. Onishi, Kyle E. Chambers, and Cynthia Fisher. 2003. Infants learn phonotactic regularities from brief auditory experience. Cognition, 87:B69– B77. R. J. Parikh. 1966. On context-free languages. Journal of the ACM, 13, 570581., 13:570–581. James Rogers and Geoffrey Pullum. to appear. Aural pattern recognition experiments and the subregular hierarchy. Journal of Logic, Language and Information. James Rogers, Jeffrey Heinz, Gil Bailey, Matt Edlefsen, Molly Visscher, David Wellcome, and Sean Wibel. 2009. On languages piecewise testable in the strict sense. In Proceedings of the 11th Meeting of the Assocation for Mathematics of Language. John Shawe-Taylor and Nello Christianini. 2005. Kernel Methods for Pattern Analysis. Cambridge University Press. Imre Simon. 1975. Piecewise testable events. In Automata Theory and Formal Languages, pages 214– 222. Imre Simon. 1993. The product of rational languages. In ICALP ’93: Proceedings of the 20th International Colloquium on Automata, Languages and Programming, pages 430–444, London, UK. Springer-Verlag. Carol Whitney and Piers Cornelissen. 2008. SERIOL reading. Language and Cognitive Processes, 23:143–164. Carol Whitney. 2001. How the brain encodes the order of letters in a printed word: the SERIOL model and selective literature review. Psychonomic Bulletin Review, 8:221–243. 906
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 907–916, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Compositional Matrix-Space Models of Language Sebastian Rudolph Karlsruhe Institute of Technology Karlsruhe, Germany [email protected] Eugenie Giesbrecht FZI Forschungszentrum Informatik Karlsuhe, Germany [email protected] Abstract We propose CMSMs, a novel type of generic compositional models for syntactic and semantic aspects of natural language, based on matrix multiplication. We argue for the structural and cognitive plausibility of this model and show that it is able to cover and combine various common compositional NLP approaches ranging from statistical word space models to symbolic grammar formalisms. 1 Introduction In computational linguistics and information retrieval, Vector Space Models (Salton et al., 1975) and its variations – such as Word Space Models (Schütze, 1993), Hyperspace Analogue to Language (Lund and Burgess, 1996), or Latent Semantic Analysis (Deerwester et al., 1990) – have become a mainstream paradigm for text representation. Vector Space Models (VSMs) have been empirically justified by results from cognitive science (Gärdenfors, 2000). They embody the distributional hypothesis of meaning (Firth, 1957), according to which the meaning of words is defined by contexts in which they (co-)occur. Depending on the specific model employed, these contexts can be either local (the co-occurring words), or global (a sentence or a paragraph or the whole document). Indeed, VSMs proved to perform well in a number of tasks requiring computation of semantic relatedness between words, such as synonymy identification (Landauer and Dumais, 1997), automatic thesaurus construction (Grefenstette, 1994), semantic priming, and word sense disambiguation (Padó and Lapata, 2007). Until recently, little attention has been paid to the task of modeling more complex conceptual structures with such models, which constitutes a crucial barrier for semantic vector models on the way to model language (Widdows, 2008). An emerging area of research receiving more and more attention among the advocates of distributional models addresses the methods, algorithms, and evaluation strategies for representing compositional aspects of language within a VSM framework. This requires novel modeling paradigms, as most VSMs have been predominantly used for meaning representation of single words and the key problem of common bag-of-words-based VSMs is that word order information and thereby the structure of the language is lost. There are approaches under way to work out a combined framework for meaning representation using both the advantages of symbolic and distributional methods. Clark and Pulman (2007) suggest a conceptual model which unites symbolic and distributional representations by means of traversing the parse tree of a sentence and applying a tensor product for combining vectors of the meanings of words with the vectors of their roles. The model is further elaborated by Clark et al. (2008). To overcome the aforementioned difficulties with VSMs and work towards a tight integration of symbolic and distributional approaches, we propose a Compositional Matrix-Space Model (CMSM) which employs matrices instead of vectors and makes use of matrix multiplication as the one and only composition operation. The paper is structured as follows: We start by providing the necessary basic notions in linear algebra in Section 2. 
In Section 3, we give a formal account of the concept of compositionality, introduce our model, and argue for the plausibility of CMSMs in the light of structural and cognitive considerations. Section 4 shows how common VSM approaches to compositionality can be captured by CMSMs while Section 5 illustrates the capabilities of our model to likewise cover symbolic approaches. In Section 6, we demonstrate 907 how several CMSMs can be combined into one model. We provide an overview of related work in Section 7 before we conclude and point out avenues for further research in Section 8. 2 Preliminaries In this section, we recap some aspects of linear algebra to the extent needed for our considerations about CMSMs. For a more thorough treatise we refer the reader to a linear algebra textbook (such as Strang (1993)). Vectors. Given a natural number n, an ndimensional vector v over the reals can be seen as a list (or tuple) containing n real numbers r1, . . . , rn ∈R, written v = (r1 r2 · · · rn). Vectors will be denoted by lowercase bold font letters and we will use the notation v(i) to refer to the ith entry of vector v. As usual, we write Rn to denote the set of all n-dimensional vectors with real entries. Vectors can be added entrywise, i.e., (r1 · · · rn) + (r′ 1 · · · r′ n) = (r1+ r′ 1 · · · rn+r′ n). Likewise, the entry-wise product (also known as Hadamard product) is defined by (r1 · · · rn) ⊙(r′ 1 · · · r′ n) = (r1·r′ 1 · · · rn·r′ n). Matrices. Given two real numbers n, m, an n×m matrix over the reals is an array of real numbers with n rows and m columns. We will use capital letters to denote matrices and, given a matrix M we will write M(i, j) to refer to the entry in the ith row and the jth column: M =  M(1, 1) M(1, 2) · · · M(1, j) · · · M(1, m) M(2, 1) M(2, 2) ... ... ... M(i, 1) M(i, j) ... ... ... M(n, 1) M(1, 2) · · · · · · · · · M(n, m)  The set of all n × m matrices with real number entries is denoted by Rn×m. Obviously, mdimensional vectors can be seen as 1 × m matrices. A matrix can be transposed by exchanging columns and rows: given the n × m matrix M, its transposed version MT is a m × n matrix defined by MT(i, j) = M(j, i). Linear Mappings. Beyond being merely arraylike data structures, matrices correspond to certain type of functions, so-called linear mappings, having vectors as in- and output. More precisely, an n × m matrix M applied to an m-dimensional vector v yields an n-dimensional vector v′ (written: vM = v′) according to v′(i) = m X j=1 v(j) · M(i, j) Linear mappings can be concatenated, giving rise to the notion of standard matrix multiplication: we write M1M2 to denote the matrix that corresponds to the linear mapping defined by applying first M1 and then M2. Formally, the matrix product of the n×l matrix M1 and the l×m matrix M2 is an n × m matrix M = M1M2 defined by M(i, j) = lX k=1 M1(i, k) · M2(k, j) Note that the matrix product is associative (i.e., (M1M2)M3 = M1(M2M3) always holds, thus parentheses can be omitted) but not commutative (M1M2 = M2M1 does not hold in general, i.e., the order matters). Permutations. Given a natural number n, a permutation on {1 . . . n} is a bijection (i.e., a mapping that is one-to-one and onto) Φ : {1 . . . n} → {1 . . . n}. A permutation can be seen as a “reordering scheme” on a list with n elements: the element at position i will get the new position Φ(i) in the reordered list. 
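These notions are easy to check in a few lines of code. The following minimal sketch, which uses NumPy purely as an implementation vehicle (it is not part of the formal apparatus above), illustrates the mapping of a row vector by a matrix, the associativity and non-commutativity of the matrix product, and the reading of a permutation as a reordering scheme:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [0.0, 1.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
    C = np.array([[2.0, 0.0],
                  [1.0, 3.0]])
    v = np.array([1.0, -1.0])

    # v @ A realizes a linear mapping of the row vector v (NumPy's convention;
    # it differs from the index convention above by a transpose).
    print(v @ A)                                   # [1. 1.]
    print(np.allclose((A @ B) @ C, A @ (B @ C)))   # True: the matrix product is associative
    print(np.allclose(A @ B, B @ A))               # False: it is not commutative in general

    # A permutation as a reordering scheme: the element at position i moves to phi[i].
    phi = [2, 0, 1]                                # 0-based reordering
    xs = ['a', 'b', 'c']
    reordered = [None] * len(xs)
    for i, target in enumerate(phi):
        reordered[target] = xs[i]
    print(reordered)                               # ['b', 'c', 'a']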
Likewise, a permutation can be applied to a vector resulting in a rearrangement of the entries. We write Φn to denote the permutation corresponding to the n-fold application of Φ and Φ−1 to denote the permutation that “undoes” Φ. Given a permutation Φ, the corresponding permutation matrix MΦ is defined by MΦ(i, j) = ( 1 if Φ(j) = i, 0 otherwise. Then, obviously permuting a vector according to Φ can be expressed in terms of matrix multiplication as well as we obtain for any vector v ∈Rn: Φ(v) = vMΦ Likewise, iterated application (Φn) and the inverses Φ−n carry over naturally to the corresponding notions in matrices. 908 3 Compositionality and Matrices The underlying principle of compositional semantics is that the meaning of a sentence (or a word phrase) can be derived from the meaning of its constituent tokens by applying a composition operation. More formally, the underlying idea can be described as follows: given a mapping [[ · ]] : Σ →S from a set of tokens (words) Σ into some semantical space S (the elements of which we will simply call “meanings”), we find a semantic composition operation ▷◁: S∗→S mapping sequences of meanings to meanings such that the meaning of a sequence of tokens σ1σ2 . . . σn can be obtained by applying ▷◁to the sequence [[σ1]][[σ2]] . . . [[σn]]. This situation qualifies [[ · ]] as a homomorphism between (Σ∗, ·) and (S, ▷◁) and can be displayed as follows: σ1 [[ ·]]  concatenation · ' σ2 [[ ·]]  ( · · · σn [[ ·]]  ) σ1σ2 . . . σn [[ ·]]  [[σ1]] composition ▷◁ 6 [[σ2]] 5 · · · [[σn]] 5 [[σ1σ2 . . . σn]] A great variety of linguistic models are subsumed by this general idea ranging from purely symbolic approaches (like type systems and categorial grammars) to rather statistical models (like vector space and word space models). At the first glance, the underlying encodings of word semantics as well as the composition operations differ significantly. However, we argue that a great variety of them can be incorporated – and even freely inter-combined – into a unified model where the semantics of simple tokens and complex phrases is expressed by matrices and the composition operation is standard matrix multiplication. More precisely, in Compositional Marix-Space Models, we have S = Rn×n, i.e. the semantical space consists of quadratic matrices, and the composition operator ▷◁coincides with matrix multiplication as introduced in Section 2. In the following, we will provide diverse arguments illustrating that CMSMs are intuitive and natural. 3.1 Algebraic Plausibility – Structural Operation Properties Most linear-algebra-based operations that have been proposed to model composition in language models are associative and commutative. Thereby, they realize a multiset (or bag-of-words) semantics that makes them insensitive to structural differences of phrases conveyed through word order. While associativity seems somewhat acceptable and could be defended by pointing to the streamlike, sequential nature of language, commutativity seems way less justifiable, arguably. As mentioned before, matrix multiplication is associative but non-commutative, whence we propose it as more adequate for modeling compositional semantics of language. 3.2 Neurological Plausibility – Progression of Mental States From a very abstract and simplified perspective, CMSMs can also be justified neurologically. Suppose the mental state of a person at one specific moment in time can be encoded by a vector v of numerical values; one might, e.g., think of the level of excitation of neurons. 
Then, an external stimulus or signal, such as a perceived word, will result in a change of the mental state. Thus, the external stimulus can be seen as a function being applied to v yielding as result the vector v′ that corresponds to the persons mental state after receiving the signal. Therefore, it seems sensible to associate with every signal (in our case: token σ) a respective function (a linear mapping, represented by a matrix M = [[σ]] that maps mental states to mental states (i.e. vectors v to vectors v′ = vM). Consequently, the subsequent reception of inputs σ, σ′ associated to matrices M and M′ will transform a mental vector v into the vector (vM)M′ which by associativity equals v(MM′). Therefore, MM′ represents the mental state transition triggered by the signal sequence σσ′. Naturally, this consideration carries over to sequences of arbitrary length. This way, abstracting from specific initial mental state vectors, our semantic space S can be seen as a function space of mental transformations represented by matrices, whereby matrix multiplication realizes subsequent execution of those transformations triggered by the input token sequence. 909 3.3 Psychological Plausibility – Operations on Working Memory A structurally very similar argument can be provided on another cognitive explanatory level. There have been extensive studies about human language processing justifying the hypothesis of a working memory (Baddeley, 2003). The mental state vector can be seen as representation of a person’s working memory which gets transformed by external input. Note that matrices can perform standard memory operations such as storing, deleting, copying etc. For instance, the matrix Mcopy(k,l) defined by Mcopy(k,l)(i, j) = ( 1 if i = j , l or i = k, j = l, 0 otherwise. applied to a vector v, will copy its kth entry to the lth position. This mechanism of storage and insertion can, e.g., be used to simulate simple forms of anaphora resolution. 4 CMSMs Encode Vector Space Models In VSMs numerous vector operations have been used to model composition (Widdows, 2008), some of the more advanced ones being related to quantum mechanics. We show how these common composition operators can be modeled by CMSMs.1 Given a vector composition operation ▷◁: Rn×Rn →Rn, we provide a surjective function ψ▷◁: Rn →Rn′×n′ that translates the vector representation into a matrix representation in a way such that for all v1, . . . vk ∈Rn holds v1 ▷◁. . . ▷◁vk = ψ−1 ▷◁(ψ▷◁(v1) . . . ψ▷◁(vk)) where ψ▷◁(vi)ψ▷◁(vj) denotes matrix multiplication of the matrices assigned to vi and vj. 4.1 Vector Addition As a simple basic model for semantic composition, vector addition has been proposed. Thereby, tokens σ get assigned (usually high-dimensional) vectors vσ and to obtain a representation of the meaning of a phrase or a sentence w = σ1 . . . σk, the vector sum of the vectors associated to the constituent tokens is calculated: vw = Pk i=1 vσi . 1In our investigations we will focus on VSM composition operations which preserve the format (i.e. which yield a vector of the same dimensionality), as our notion of compositionality requires models that allow for iterated composition. In particular, this rules out dot product and tensor product. However the convolution product can be seen as a condensed version of the tensor product. This kind of composition operation is subsumed by CMSMs; suppose in the original model, a token σ gets assigned the vector vσ, then by defining ψ+(vσ) =  1 · · · 0 0 ... ... ... 
0 1 0 vσ 1  (mapping n-dimensional vectors to (n+1)×(n+1) matrices), we obtain for a phrase w = σ1 . . . σk ψ−1 + (ψ+(vσ1) . . . ψ+(vσk)) = vσ1 + . . . + vσk = vw. Proof. By induction on k. For k = 1, we have vw = vσ = ψ−1 + (ψ+(vσ1)). For k > 1, we have ψ−1 + (ψ+(vσ1) . . . ψ+(vσk−1)ψ+(vσk)) = ψ−1 + (ψ+(ψ−1 + (ψ+(vσ1) . . . ψ+(vσk−1)))ψ+(vσk)) i.h.= ψ−1 + (ψ+(Pk−1 i=1 vσi)ψ+(vσk)) =ψ−1 +   1 · · · 0 0 ... ... ... 0 1 0 Pk−1 i=1 vσi(1)· · · Pk−1 i=1 vσi(n) 1   1 · · · 0 0 ... ... ... 0 1 0 vσk(1)· · · vσk(n) 1   =ψ−1 +  1 · · · 0 0 ... ... ... 0 1 0 Pk i=1vσi(1)· · · Pk i=1vσi(n) 1  = kX i=1 vσi q.e.d.2 4.2 Component-wise Multiplication On the other hand, the Hadamard product (also called entry-wise product, denoted by ⊙) has been proposed as an alternative way of semantically composing token vectors. By using a different encoding into matrices, CMSMs can simulate this type of composition operation as well. By letting ψ⊙(vσ) =  vσ(1) 0 · · · 0 0 vσ(2) ... ... 0 0 · · · 0 vσ(n)  , we obtain an n×n matrix representation for which ψ−1 ⊙(ψ⊙(vσ1) . . . ψ⊙(vσk)) = vσ1 ⊙. . . ⊙vσk = vw. 4.3 Holographic Reduced Representations Holographic reduced representations as introduced by Plate (1995) can be seen as a refinement 2The proofs for the respective correspondences for ⊙and ⊛as well as the permutation-based approach in the following sections are structurally analog, hence, we will omit them for space reasons. 910 of convolution products with the benefit of preserving dimensionality: given two vectors v1, v2 ∈ Rn, their circular convolution product v1 ⊛v2 is again an n-dimensional vector v3 defined by v3(i + 1) = n−1 X k=0 v1(k + 1) · v2((i −k mod n) + 1) for 0 ≤i ≤n−1. Now let ψ⊛(v) be the n×n matrix M with M(i, j) = v(( j −i mod n) + 1). In the 3-dimensional case, this would result in ψ⊛(v(1) v(2) v(3)) =  v(1) v(2) v(3) v(3) v(1) v(2) v(2) v(3) v(1)  Then, it can be readily checked that ψ−1 ⊛(ψ⊛(vσ1) . . . ψ⊛(vσk)) = vσ1 ⊛. . . ⊛vσk = vw. 4.4 Permutation-based Approaches Sahlgren et al. (2008) use permutations on vectors to account for word order. In this approach, given a token σm occurring in a sentence w = σ1 . . . σk with predefined “uncontextualized” vectors vσ1 . . . vσk, we compute the contextualized vector vw,m for σm by vw,m = Φ1−m(vσ1) + . . . + Φk−m(vσk), which can be equivalently transformed into Φ1−mvσ1 + Φ(. . . + Φ(vσk−1 + (Φ(vσk))) . . .). Note that the approach is still token-centered, i.e., a vector representation of a token is endowed with contextual representations of surrounding tokens. Nevertheless, this setting can be transferred to a CMSM setting by recording the position of the focused token as an additional parameter. Now, by assigning every vσ the matrix ψΦ(vσ) =  0 MΦ ... 0 vσ 1  we observe that for Mw,m := (M− Φ)m−1ψΦ(vσ1) . . . ψΦ(vσk) we have Mw,m =  0 Mk−m Φ ... 0 vw,m 1  , whence ψ−1 Φ (M− Φ)m−1ψΦ(vσ1) . . . ψΦ(vσk) = vw,m. 5 CMSMs Encode Symbolic Approaches Now we will elaborate on symbolic approaches to language, i.e., discrete grammar formalisms, and show how they can conveniently be embedded into CMSMs. This might come as a surprise, as the apparent likeness of CMSMs to vector-space models may suggest incompatibility to discrete settings. 
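Before turning to these symbolic settings, the vector-space encodings of Section 4 can be sanity-checked numerically. The following minimal sketch (NumPy; the helper names are ours, not part of the model) verifies that the additive encoding ψ+ and the diagonal encoding for the Hadamard product compose as claimed:

    import numpy as np

    def psi_plus(v):
        # Encoding of Section 4.1: identity block on top, the vector followed by 1
        # as the last row of an (n+1) x (n+1) matrix.
        n = len(v)
        M = np.eye(n + 1)
        M[n, :n] = v
        return M

    def decode_plus(M):
        # Read the encoded vector back off the last row.
        return M[-1, :-1]

    def psi_hadamard(v):
        # Encoding of Section 4.2: a diagonal matrix.
        return np.diag(v)

    v1 = np.array([1.0, 2.0, 3.0])
    v2 = np.array([0.5, -1.0, 4.0])

    print(decode_plus(psi_plus(v1) @ psi_plus(v2)))       # [1.5 1. 7.]  == v1 + v2
    print(np.diag(psi_hadamard(v1) @ psi_hadamard(v2)))   # [0.5 -2. 12.] == v1 * v2 entry-wise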
5.1 Group Theory Group theory and grammar formalisms based on groups and pre-groups play an important role in computational linguistics (Dymetman, 1998; Lambek, 1958). From the perspective of our compositionality framework, those approaches employ a group (or pre-group) (G, ·) as semantical space S where the group operation (often written as multiplication) is used as composition operation ▷◁. According Cayley’s Theorem (Cayley, 1854), every group G is isomorphic to a permutation group on some set S . Hence, assuming finiteness of G and consequently S , we can encode group-based grammar formalisms into CMSMs in a straightforward way by using permutation matrices of size |S | × |S |. 5.2 Regular Languages Regular languages constitute a basic type of languages characterized by a symbolic formalism. We will show how to select the assignment [[ · ]] for a CMSM such that the matrix associated to a token sequence exhibits whether this sequence belongs to a given regular language, that is if it is accepted by a given finite state automaton. As usual (cf. e.g., Hopcroft and Ullman (1979)) we define a nondeterministic finite automaton A = (Q, Σ, ∆, QI, QF) with Q = {q0, . . . , qn−1} being the set of states, Σ the input alphabet, ∆⊆Q×Σ×Q the transition relation, and QI and QF being the sets of initial and final states, respectively. 911 Then we assign to every token σ ∈Σ the n × n matrix [[σ]] = M with M(i, j) = ( 1 if (qi, σ, qj) ∈∆, 0 otherwise. Hence essentially, the matrix M encodes all state transitions which can be caused by the input σ. Likewise, for a word w = σ1 . . . σk ∈Σ∗, the matrix Mw := [[σ1]] . . . [[σk]] will encode all state transitions mediated by w. Finally, if we define vectors vI and vF by vI(i) = ( 1 if qi ∈QI, 0 otherwise, vF(i) = ( 1 if qi ∈QF, 0 otherwise, then we find that w is accepted by A exactly if vIMwvT F ≥1. 5.3 The General Case: Matrix Grammars Motivated by the above findings, we now define a general notion of matrix grammars as follows: Definition 1 Let Σ be an alphabet. A matrix grammar M of degree n is defined as the pair ⟨[[ · ]], AC⟩where [[ · ]] is a mapping from Σ to n×n matrices and AC = {⟨v′ 1, v1, r1⟩, . . . , ⟨v′ m, vm, rm⟩} with v′ 1, v1, . . . , v′ m, vm ∈Rn and r1, . . . , rm ∈R is a finite set of acceptance conditions. The language generated by M (denoted by L(M)) contains a token sequence σ1 . . . σk ∈Σ∗exactly if v′ i[[σ1]] . . . [[σk]]vT i ≥ri for all i ∈{1, . . . , m}. We will call a language L matricible if L = L(M) for some matrix grammar M. Then, the following proposition is a direct consequence from the preceding section. Proposition 1 Regular languages are matricible. However, as demonstrated by the subsequent examples, also many non-regular and even noncontext-free languages are matricible, hinting at the expressivity of our grammar model. Example 1 We define M⟨[[ · ]], AC⟩with Σ = {a, b, c} [[a]] =  3 0 0 0 0 1 0 0 0 0 3 0 0 0 0 1  [[b]] =  3 0 0 0 0 1 0 0 0 1 3 0 1 0 0 1  [[c]] =  3 0 0 0 0 1 0 0 0 2 3 0 2 0 0 1  AC = { ⟨(0 0 1 1), (1 −1 0 0), 0⟩, ⟨(0 0 1 1), (−1 1 0 0), 0⟩} Then L(M) contains exactly all palindromes from {a, b, c}∗, i.e., the words d1d2 . . . dn−1dn for which d1d2 . . . dn−1dn = dndn−1 . . . d2d1. 
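This example can be checked directly. The following sketch (NumPy; the function names are ours) implements the acceptance test v′[[σ1]] · · · [[σk]]vT ≥ r of Definition 1 with the matrices and acceptance conditions of Example 1:

    import numpy as np

    def letter_matrix(d):
        # The 4 x 4 matrices of Example 1, parameterized by the entry d
        # that distinguishes a, b, c (d = 0, 1, 2).
        return np.array([[3, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, d, 3, 0],
                         [d, 0, 0, 1]], dtype=float)

    token_matrix = {'a': letter_matrix(0), 'b': letter_matrix(1), 'c': letter_matrix(2)}

    acceptance_conditions = [
        (np.array([0.0, 0.0, 1.0, 1.0]), np.array([ 1.0, -1.0, 0.0, 0.0]), 0.0),
        (np.array([0.0, 0.0, 1.0, 1.0]), np.array([-1.0,  1.0, 0.0, 0.0]), 0.0),
    ]

    def generates(word):
        # Acceptance test of Definition 1: v' [[s1]]...[[sk]] v^T >= r for every condition.
        M = np.eye(4)
        for t in word:
            M = M @ token_matrix[t]
        return all(vp @ M @ v >= r for (vp, v, r) in acceptance_conditions)

    print(generates('abcba'))   # True  (a palindrome)
    print(generates('abc'))     # False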
Example 2 We define M = ⟨[[ · ]], AC⟩with Σ = {a, b, c} [[a]]=  1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 1 0 0 0 0 0 0 1  [[b]]=  0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 2 0 0 0 0 0 0 1  [[c]]=  0 0 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 2  AC = { ⟨(1 0 0 0 0 0), (0 0 1 0 0 0), 1⟩, ⟨(0 0 0 1 1 0), (0 0 0 1 −1 0), 0⟩, ⟨(0 0 0 0 1 1), (0 0 0 0 1 −1), 0⟩, ⟨(0 0 0 1 1 0), (0 0 0 −1 0 1), 0⟩} Then L(M) is the (non-context-free) language {ambmcm | m > 0}. The following properties of matrix grammars and matricible language are straightforward. Proposition 2 All languages characterized by a set of linear equations on the letter counts are matricible. Proof. Suppose Σ = {a1, . . . an}. Given a word w, let xi denote the number of occurrences of ai in w. A linear equation on the letter counts has the form k1x1 + . . . + knxn = k k, k1, . . . , kn ∈R Now define [[ai]] = ψ+(ei), where ei is the ith unit vector, i.e. it contains a 1 at he ith position and 0 in all other positions. Then, it is easy to see that w will be mapped to M = ψ+(x1 · · · xn). Due to the fact that en+1M = (x1 · · · xn 1) we can enforce the above linear equation by defining the acceptance conditions AC = { ⟨en+1, (k1 . . . kn −k), 0⟩, ⟨−en+1, (k1 . . . kn −k), 0⟩}. q.e.d. Proposition 3 The intersection of two matricible languages is again a matricible language. Proof. This is a direct consequence of the considerations in Section 6 together with the observation, that the new set of acceptance conditions is trivially obtained from the old ones with adapted dimensionalities. q.e.d. 912 Note that the fact that the language {ambmcm | m > 0} is matricible, as demonstrated in Example 2 is a straightforward consequence of the Propositions 1, 2, and 3, since the language in question can be described as the intersection of the regular language a+b+c+ with the language characterized by the equations xa −xb = 0 and xb −xc = 0. We proceed by giving another account of the expressivity of matrix grammars by showing undecidability of the emptiness problem. Proposition 4 The problem whether there is a word which is accepted by a given matrix grammar is undecidable. Proof. The undecidable Post correspondence problem (Post, 1946) is described as follows: given two lists of words u1, . . . , un and v1, . . . , vn over some alphabet Σ′, is there a sequence of numbers h1, . . . , hm (1 ≤hj ≤n) such that uh1 . . . uhm = vh1 . . . vhm? We now reduce this problem to the emptiness problem of a matrix grammar. W.l.o.g., let Σ′ = {a1, . . . , ak}. We define a bijection # from Σ′∗to N by #(an1an2 . . . anl) = lX i=1 (ni −1) · k(l−i) Note that this is indeed a bijection and that for w1, w2 ∈Σ′∗, we have #(w1w2) = #(w1) · k|w2| + #(w2). Now, we define M as follows: Σ = {b1, . . . bn} [[bi]] =  k|ui| 0 0 0 k|vi| 0 #(ui) #(vi) 1  AC = { ⟨(0 0 1), (1 −1 0), 0⟩, ⟨(0 0 1), (−1 1 0), 0⟩} Using the above fact about # and a simple induction on m, we find that [[ah1]] . . . [[ahm]] =  k|uh1...uhm| 0 0 0 k|vh1...vhm| 0 #(uh1. . .uhm) #(vh1. . .vhm) 1  Evaluating the two acceptance conditions, we find them satisfied exactly if #(uh1 . . . uhm) = #(vh1 . . . vhm). Since # is a bijection, this is the case if and only if uh1 . . . uhm = vh1 . . . vhm. Therefore M accepts bh1 . . . bhm exactly if the sequence h1, . . . 
, hm is a solution to the given Post Correspondence Problem. Consequently, the question whether such a solution exists is equivalent to the question whether the language L(M) is nonempty. q.e.d. These results demonstrate that matrix grammars cover a wide range of formal languages. Nevertheless some important questions remain open and need to be clarified next: Are all context-free languages matricible? We conjecture that this is not the case.3 Note that this question is directly related to the question whether Lambek calculus can be modeled by matrix grammars. Are matricible languages closed under concatenation? That is: given two arbitrary matricible languages L1, L2, is the language L = {w1w2 | w1 ∈ L1, w2 ∈L2} again matricible? Being a property common to all language types from the Chomsky hierarchy, answering this question is surprisingly non-trivial for matrix grammars. In case of a negative answer to one of the above questions it might be worthwhile to introduce an extended notion of context grammars to accommodate those desirable properties. For example, allowing for some nondeterminism by associating several matrices to one token would ensure closure under concatenation. How do the theoretical properties of matrix grammars depend on the underlying algebraic structure? Remember that we considered matrices containing real numbers as entries. In general, matrices can be defined on top of any mathematical structure that is (at least) a semiring (Golan, 1992). Examples for semirings are the natural numbers, boolean algebras, or polynomials with natural number coefficients. Therefore, it would be interesting to investigate the influence of the choice of the underlying semiring on the properties of the matrix grammars – possibly nonstandard structures turn out to be more appropriate for capturing certain compositional language properties. 6 Combination of Different Approaches Another central advantage of the proposed matrixbased models for word meaning is that several matrix models can be easily combined into one. 3For instance, we have not been able to find a matrix grammar that recognizes the language of all well-formed parenthesis expressions. 913 Again assume a sequence w = σ1 . . . σk of tokens with associated matrices [[σ1]], . . . , [[σk]] according to one specific model and matrices ([σ1]), . . . , ([σk]) according to another. Then we can combine the two models into one {[ · ]} by assigning to σi the matrix {[σi]} =  0 · · · 0 [[σi]] ... ... 0 0 0 · · · 0 ... ... ([σi]) 0 0  By doing so, we obtain the correspondence {[σ1]} . . . {[σk]} =  0 · · · 0 [[σ1]] . . . [[σk]] ... ... 0 0 0 · · · 0 ... ... ([σ1]) . . . ([σk]) 0 0  In other words, the semantic compositions belonging to two CMSMs can be executed “in parallel.” Mark that by providing non-zero entries for the upper right and lower left matrix part, information exchange between the two models can be easily realized. 7 Related Work We are not the first to suggest an extension of classical VSMs to matrices. Distributional models based on matrices or even higher-dimensional arrays have been proposed in information retrieval (Gao et al., 2004; Antonellis and Gallopoulos, 2006). However, to the best of our knowledge, the approach of realizing compositionality via matrix multiplication seems to be entirely original. 
Among the early attempts to provide more compelling combinatory functions to capture word order information and the non-commutativity of linguistic compositional operation in VSMs is the work of Kintsch (2001) who is using a more sophisticated addition function to model predicateargument structures in VSMs. Mitchell and Lapata (2008) formulate semantic composition as a function m = f(w1, w2, R, K) where R is a relation between w1 and w2 and K is additional knowledge. They evaluate the model with a number of addition and multiplication operations for vector combination on a sentence similarity task proposed by Kintsch (2001). Widdows (2008) proposes a number of more advanced vector operations well-known from quantum mechanics, such as tensor product and convolution, to model composition in vector spaces. He shows the ability of VSMs to reflect the relational and phrasal meanings on a simplified analogy task. Giesbrecht (2009) evaluates four vector composition operations (+, ⊙, tensor product, convolution) on the task of identifying multi-word units. The evaluation results of the three studies are not conclusive in terms of which vector operation performs best; the different outcomes might be attributed to the underlying word space models; e.g., the models of Widdows (2008) and Giesbrecht (2009) feature dimensionality reduction while that of Mitchell and Lapata (2008) does not. In the light of these findings, our CMSMs provide the benefit of just one composition operation that is able to mimic all the others as well as combinations thereof. 8 Conclusion and Future Work We have introduced a generic model for compositionality in language where matrices are associated with tokens and the matrix representation of a token sequence is obtained by iterated matrix multiplication. We have given algebraic, neurological, and psychological plausibility indications in favor of this choice. We have shown that the proposed model is expressive enough to cover and combine a variety of distributional and symbolic aspects of natural language. This nourishes the hope that matrix models can serve as a kind of lingua franca for compositional models. This having said, some crucial questions remain before CMSMs can be applied in practice: How to acquire CMSMs for large token sets and specific purposes? We have shown the value and expressivity of CMSMs by providing carefully hand-crafted encodings. In practical cases, however, the number of token-to-matrix assignments will be too large for this manual approach. Therefore, methods to (semi-)automatically acquire those assignments from available data are required. To this end, machine learning techniques need to be investigated with respect to their applicability to this task. Presumably, hybrid approaches have to be considered, where parts of 914 the matrix representation are learned whereas others are stipulated in advance guided by external sources (such as lexical information). In this setting, data sparsity may be overcome through tensor methods: given a set T of tokens together with the matrix assignment [[]] : T → Rn×n, this datastructure can be conceived as a 3dimensional array (also known as tensor) of size n×n×|T| wherein the single token-matrices can be found as slices. Then tensor decomposition techniques can be applied in order to find a compact representation, reduce noise, and cluster together similar tokens (Tucker, 1966; Rendle et al., 2009). 
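Returning briefly to the combination scheme of Section 6, its behaviour is easy to confirm numerically. The small sketch below (NumPy; the helper function is ours) builds block-diagonally combined token matrices for two randomly chosen component models and checks that their products carry the two models' products in the diagonal blocks:

    import numpy as np

    def combine(M1, M2):
        # Block-diagonal combination of two token matrices as in Section 6.
        n, m = M1.shape[0], M2.shape[0]
        C = np.zeros((n + m, n + m))
        C[:n, :n] = M1
        C[n:, n:] = M2
        return C

    rng = np.random.default_rng(0)
    A1, A2 = rng.random((2, 2)), rng.random((3, 3))   # one token in the two component models
    B1, B2 = rng.random((2, 2)), rng.random((3, 3))   # a second token in the two models

    # The product of combined matrices equals the combination of the two products.
    print(np.allclose(combine(A1, A2) @ combine(B1, B2), combine(A1 @ B1, A2 @ B2)))   # True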
First evaluation results employing this approach to the task of free associations are reported by Giesbrecht (2010). How does linearity limit the applicability of CMSMs? In Section 3, we justified our model by taking the perspective of tokens being functions which realize mental state transitions. Yet, using matrices to represent those functions restricts them to linear mappings. Although this restriction brings about benefits in terms of computability and theoretical accessibility, the limitations introduced by this assumption need to be investigated. Clearly, certain linguistic effects (like aposteriori disambiguation) cannot be modeled via linear mappings. Instead, we might need some in-between application of simple nonlinear functions in the spirit of quantum-collapsing of a "superposed" mental state (such as the winner takes it all, survival of the top-k vector entries, and so forth). Thus, another avenue of further research is to generalize from the linear approach. Acknowledgements This work was supported by the German Research Foundation (DFG) under the Multipla project (grant 38457858) as well as by the German Federal Ministry of Economics (BMWi) under the project Theseus (number 01MQ07019). References [Antonellis and Gallopoulos2006] Ioannis Antonellis and Efstratios Gallopoulos. 2006. Exploring term-document matrices from matrix models in text mining. CoRR, abs/cs/0602076. [Baddeley2003] Alan D. Baddeley. 2003. Working memory and language: An overview. Journal of Communication Disorder, 36:198–208. [Cayley1854] Arthur Cayley. 1854. On the theory of groups as depending on the symbolic equation θn = 1. Philos. Magazine, 7:40–47. [Clark and Pulman2007] Stephen Clark and Stephen Pulman. 2007. Combining symbolic and distributional models of meaning. In Proceedings of the AAAI Spring Symposium on Quantum Interaction, Stanford, CA, 2007, pages 52–55. [Clark et al.2008] Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. 2008. A compositional distributional model of meaning. In Proceedings of the Second Symposium on Quantum Interaction (QI2008), pages 133–140. [Deerwester et al.1990] Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41:391–407. [Dymetman1998] Marc Dymetman. 1998. Group theory and computational linguistics. J. of Logic, Lang. and Inf., 7(4):461–497. [Firth1957] John R. Firth. 1957. A synopsis of linguistic theory 1930-55. Studies in linguistic analysis, pages 1–32. [Gao et al.2004] Kai Gao, Yongcheng Wang, and Zhiqi Wang. 2004. An efficient relevant evaluation model in information retrieval and its application. In CIT ’04: Proceedings of the The Fourth International Conference on Computer and Information Technology, pages 845–850. IEEE Computer Society. [Gärdenfors2000] Peter Gärdenfors. 2000. Conceptual Spaces: The Geometry of Thought. MIT Press, Cambridge, MA, USA. [Giesbrecht2009] Eugenie Giesbrecht. 2009. In search of semantic compositionality in vector spaces. In Sebastian Rudolph, Frithjof Dau, and Sergei O. Kuznetsov, editors, ICCS, volume 5662 of Lecture Notes in Computer Science, pages 173–184. Springer. [Giesbrecht2010] Eugenie Giesbrecht. 2010. Towards a matrix-based distributional model of meaning. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Student Research Workshop. ACL. [Golan1992] Jonathan S. Golan. 
1992. The theory of semirings with applications in mathematics and theoretical computer science. Addison-Wesley Longman Ltd. [Grefenstette1994] Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Springer. 915 [Hopcroft and Ullman1979] John E. Hopcroft and Jeffrey D. Ullman. 1979. Introduction to Automata Theory, Languages and Computation. AddisonWesley. [Kintsch2001] Walter Kintsch. 2001. Predication. Cognitive Science, 25:173–202. [Lambek1958] Joachim Lambek. 1958. The mathematics of sentence structure. The American Mathematical Monthly, 65(3):154–170. [Landauer and Dumais1997] Thomas K. Landauer and Susan T. Dumais. 1997. Solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, (104). [Lund and Burgess1996] Kevin Lund and Curt Burgess. 1996. Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instrumentation, and Computers, 28:203– 208. [Mitchell and Lapata2008] JeffMitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236–244. ACL. [Padó and Lapata2007] Sebastian Padó and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199. [Plate1995] Tony Plate. 1995. Holographic reduced representations. IEEE Transactions on Neural Networks, 6(3):623–641. [Post1946] Emil L. Post. 1946. A variant of a recursively unsolvable problem. Bulletin of the American Mathematical Society, 52:264–268. [Rendle et al.2009] Steffen Rendle, Leandro Balby Marinho, Alexandros Nanopoulos, and Lars Schmidt-Thieme. 2009. Learning optimal ranking with tensor factorization for tag recommendation. In John F. Elder IV, Françoise Fogelman-Soulié, Peter A. Flach, and Mohammed Javeed Zaki, editors, KDD, pages 727–736. ACM. [Sahlgren et al.2008] Magnus Sahlgren, Anders Holst, and Pentti Kanerva. 2008. Permutations as a means to encode order in word space. In Proc. CogSci’08, pages 1300–1305. [Salton et al.1975] Gerard Salton, Anita Wong, and Chung-Shu Yang. 1975. A vector space model for automatic indexing. Commun. ACM, 18(11):613– 620. [Schütze1993] Hinrich Schütze. 1993. Word space. In Lee C. Giles, Stephen J. Hanson, and Jack D. Cowan, editors, Advances in Neural Information Processing Systems 5, pages 895–902. MorganKaufmann. [Strang1993] Gilbert Strang. 1993. Introduction to Linear Algebra. Wellesley-Cambridge Press. [Tucker1966] Ledyard R. Tucker. 1966. Some mathematical notes on three-mode factor analysis. Psychometrika, 31(3). [Widdows2008] Dominic Widdows. 2008. Semantic vector products: some initial investigations. In Proceedings of the Second AAAI Symposium on Quantum Interaction. 916
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 917–926, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Cross-Language Document Summarization Based on Machine Translation Quality Prediction Xiaojun Wan, Huiying Li and Jianguo Xiao Institute of Compute Science and Technology, Peking University, Beijing 100871, China Key Laboratory of Computational Linguistics (Peking University), MOE, China {wanxiaojun,lihuiying,xiaojianguo}@icst.pku.edu.cn Abstract Cross-language document summarization is a task of producing a summary in one language for a document set in a different language. Existing methods simply use machine translation for document translation or summary translation. However, current machine translation services are far from satisfactory, which results in that the quality of the cross-language summary is usually very poor, both in readability and content. In this paper, we propose to consider the translation quality of each sentence in the English-to-Chinese cross-language summarization process. First, the translation quality of each English sentence in the document set is predicted with the SVM regression method, and then the quality score of each sentence is incorporated into the summarization process. Finally, the English sentences with high translation quality and high informativeness are selected and translated to form the Chinese summary. Experimental results demonstrate the effectiveness and usefulness of the proposed approach. 1 Introduction Given a document or document set in one source language, cross-language document summarization aims to produce a summary in a different target language. In this study, we focus on English-to-Chinese document summarization for the purpose of helping Chinese readers to quickly understand the major content of an English document or document set. This task is very important in the field of multilingual information access. Till now, most previous work focuses on monolingual document summarization, but cross-language document summarization has received little attention in the past years. A straightforward way for cross-language document summarization is to translate the summary from the source language to the target language by using machine translation services. However, though machine translation techniques have been advanced a lot, the machine translation quality is far from satisfactory, and in many cases, the translated texts are hard to understand. Therefore, the translated summary is likely to be hard to understand by readers, i.e., the summary quality is likely to be very poor. For example, the translated Chinese sentence for an ordinary English sentence (“It is also Mr Baker who is making the most of presidential powers to dispense largesse.”) by using Google Translate is “同时,也 是贝克是谁提出了对总统权力免除最慷慨。”. The translated sentence is hard to understand because it contains incorrect translations and it is very disfluent. If such sentences are selected into the summary, the quality of the summary would be very poor. In order to address the above problem, we propose to consider the translation quality of the English sentences in the summarization process. 
In particular, the translation quality of each English sentence is predicted by using the SVM regression method, and then the predicted MT quality score of each sentence is incorporated into the sentence evaluation process, and finally both informative and easy-to-translate sentences are selected and translated to form the Chinese summary. An empirical evaluation is conducted to evaluate the performance of machine translation quality prediction, and a user study is performed to evaluate the cross-language summary quality. The results demonstrate the effectiveness of the proposed approach. The rest of this paper is organized as follows: Section 2 introduces related work. The system is overviewed in Section 3. In Sections 4 and 5, we present the detailed algorithms and evaluation 917 results of machine translation quality prediction and cross-language summarization, respectively. We discuss in Section 6 and conclude this paper in Section 7. 2 Related Work 2.1 Machine Translation Quality Prediction Machine translation evaluation aims to assess the correctness and quality of the translation. Usually, the human reference translation is provided, and various methods and metrics have been developed for comparing the system-translated text and the human reference text. For example, the BLEU metric, the NIST metric and their relatives are all based on the idea that the more shared substrings the system-translated text has with the human reference translation, the better the translation is. Blatz et al. (2003) investigate training sentence-level confidence measures using a variety of fuzzy match scores. Albrecht and Hwa (2007) rely on regression algorithms and reference-based features to measure the quality of sentences. Transition evaluation without using reference translations has also been investigated. Quirk (2004) presents a supervised method for training a sentence level confidence measure on translation output using a human-annotated corpus. Features derived from the source sentence and the target sentence (e.g. sentence length, perplexity, etc.) and features about the translation process are leveraged. Gamon et al. (2005) investigate the possibility of evaluating MT quality and fluency at the sentence level in the absence of reference translations, and they can improve on the correlation between language model perplexity scores and human judgment by combing these perplexity scores with class probabilities from a machine-learned classifier. Specia et al. (2009) use the ICM theory to identify the threshold to map a continuous predicted score into “good” or “bad” categories. Chae and Nenkova (2009) use surface syntactic features to assess the fluency of machine translation results. In this study, we further predict the translation quality of an English sentence before the machine translation process, i.e., we do not leverage reference translation and the target sentence. 2.2 Document Summarization Document summarization methods can be generally categorized into extraction-based methods and abstraction-based methods. In this paper, we focus on extraction-based methods. Extractionbased summarization methods usually assign each sentence a saliency score and then rank the sentences in a document or document set. For single document summarization, the sentence score is usually computed by empirical combination of a number of statistical and linguistic feature values, such as term frequency, sentence position, cue words, stigma words, topic signature (Luhn 1969; Lin and Hovy, 2000). 
The summary sentences can also be selected by using machine learning methods (Kupiec et al., 1995; Amini and Gallinari, 2002) or graph-based methods (ErKan and Radev, 2004; Mihalcea and Tarau, 2004). Other methods include mutual reinforcement principle (Zha 2002; Wan et al., 2007). For multi-document summarization, the centroid-based method (Radev et al., 2004) is a typical method, and it scores sentences based on cluster centroids, position and TFIDF features. NeATS (Lin and Hovy, 2002) makes use of new features such as topic signature to select important sentences. Machine Learning based approaches have also been proposed for combining various sentence features (Wong et al., 2008). The influences of input difficulty on summarization performance have been investigated in (Nenkova and Louis, 2008). Graph-based methods have also been used to rank sentences in a document set. For example, Mihalcea and Tarau (2005) extend the TextRank algorithm to compute sentence importance in a document set. Cluster-level information has been incorporated in the graph model to better evaluate sentences (Wan and Yang, 2008). Topic-focused or query biased multi-document summarization has also been investigated (Wan et al., 2006). Wan et al. (2010) propose the EUSUM system for extracting easy-to-understand English summaries for non-native readers. Several pilot studies have been performed for the cross-language summarization task by simply using document translation or summary translation. Leuski et al. (2003) use machine translation for English headline generation for Hindi documents. Lim et al. (2004) propose to generate a Japanese summary without using a Japanese summarization system, by first translating Japanese documents into Korean documents, and then extracting summary sentences by using Korean summarizer, and finally mapping Korean summary sentences to Japanese summary sentences. Chalendar et al. (2005) focuses on semantic analysis and sentence generation techniques for cross-language summarization. Orasan 918 and Chiorean (2008) propose to produce summaries with the MMR method from Romanian news articles and then automatically translate the summaries into English. Cross language query based summarization has been investigated in (Pingali et al., 2007), where the query and the documents are in different languages. Other related work includes multilingual summarization (Lin et al., 2005), which aims to create summaries from multiple sources in multiple languages. Siddharthan and McKeown (2005) use the information redundancy in multilingual input to correct errors in machine translation and thus improve the quality of multilingual summaries. 3 The Proposed Approach Previous methods for cross-language summarization usually consist of two steps: one step for summarization and one step for translation. Different order of the two steps can lead to the following two basic English-to-Chinese summarization methods: Late Translation (LateTrans): Firstly, an English summary is produced for the English document set by using existing summarization methods. Then, the English summary is automatically translated into the corresponding Chinese summary by using machine translation services. Early Translation (EarlyTrans): Firstly, the English documents are translated into Chinese documents by using machine translation services. Then, a Chinese summary is produced for the translated Chinese documents. 
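The contrast between the two schemes can be sketched in a few lines of code. The summarizer and translator below are simple stubs of our own (not components described in this paper), included only to make the order of the two pipelines explicit:

    # Schematic stubs: translate_en_to_zh() and summarize() are placeholders.
    def translate_en_to_zh(sentence):
        return "<zh> " + sentence                # placeholder for a machine translation service

    def summarize(docs, n_sentences):
        sentences = [s for doc in docs for s in doc]
        return sentences[:n_sentences]           # placeholder for a real sentence extractor

    def late_trans(english_docs, n_sentences=5):
        # Summarize first, then translate only the few selected summary sentences.
        return [translate_en_to_zh(s) for s in summarize(english_docs, n_sentences)]

    def early_trans(english_docs, n_sentences=5):
        # Translate every sentence first, then summarize the translated documents,
        # so translation errors influence sentence selection.
        zh_docs = [[translate_en_to_zh(s) for s in doc] for doc in english_docs]
        return summarize(zh_docs, n_sentences)

    docs = [["The first sentence.", "The second sentence."], ["The third sentence."]]
    print(late_trans(docs, 2))
    print(early_trans(docs, 2))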
Generally speaking, the LateTrans method has a few advantages over the EarlyTrans method:
1) The LateTrans method is much more efficient than the EarlyTrans method, because only a few summary sentences need to be translated in the LateTrans method, whereas all the sentences in the documents must be translated in the EarlyTrans method.
2) The LateTrans method is deemed to be more effective than the EarlyTrans method, because the translation errors of the sentences have a strong influence on summary sentence extraction in the EarlyTrans method.
Thus in this study, we adopt the LateTrans method as our baseline method. We also adopt the late translation strategy for our proposed approach. In the baseline method, a translated Chinese sentence is selected into the summary because the original English sentence is informative. However, an informative and fluent English sentence may be translated into an uninformative and disfluent Chinese sentence, and in that case the sentence should not be selected into the summary. In order to address this problem of existing methods, our proposed approach takes into account a novel factor of each sentence for cross-language summary extraction. Each English sentence is associated with a score indicating its translation quality. An English sentence with a high translation quality score is more likely to be selected into the original English summary, and such an English summary can be translated into a better Chinese summary. Figure 1 gives the architecture of our proposed approach.
Figure 1: Architecture of our proposed approach
Seen from the figure, our proposed approach consists of four main steps:
1) The machine translation quality score of each English sentence is predicted by using regression methods;
2) The informativeness score of each English sentence is computed by using existing methods;
3) The English summary is produced by making use of both the machine translation quality score and the informativeness score;
4) The extracted English summary is translated into a Chinese summary by using machine translation services.
In this study, we adopt Google Translate (http://translate.google.com/translate_t) for English-to-Chinese translation. Google Translate is one of the state-of-the-art commercial machine translation systems used today. It applies statistical learning techniques to build a translation model based on both monolingual text in the target language and aligned text consisting of examples of human translations between the languages. The first step and its evaluation results will be described in Section 4, and the other steps and their evaluation results will be described together in Section 5.
4 Machine Translation Quality Prediction
4.1 Methodology
In this study, machine translation (MT) quality reflects both the translation accuracy and the fluency of the translated sentence. An English sentence with a high MT quality score is likely to be translated into an accurate and fluent Chinese sentence, which can be easily read and understood by Chinese readers. The MT quality prediction task is to map an English sentence to a numerical value corresponding to a quality level. The larger the value is, the more accurately and fluently the sentence can be translated into a Chinese sentence.
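The remainder of this subsection describes the regressor and its features. As a toy preview, one possible way to set up such a sentence-level regressor is sketched below; it uses scikit-learn's SVR as a stand-in for the LIBSVM ε-SVR adopted here and a deliberately simplified feature extractor, so it should be read as an illustrative assumption rather than the paper's implementation:

    import numpy as np
    from sklearn.svm import SVR

    def basic_features(sentence):
        # Deliberately simplified stand-in for the feature extraction described below.
        words = sentence.split()
        sub_sentences = sentence.count(',') + sentence.count(';') + 1
        return [len(words), sub_sentences, len(words) / sub_sentences]

    train_sentences = ["This is a short sentence .",
                       "This sentence , which contains a sub-clause , is considerably longer and harder ."]
    train_scores = [4.5, 2.0]        # toy human-labeled MT quality scores in [1, 5]

    X = np.array([basic_features(s) for s in train_sentences])
    model = SVR(kernel='rbf', C=1.0, epsilon=0.1).fit(X, train_scores)

    test_sentences = ["Another fairly short sentence ."]
    predicted = model.predict(np.array([basic_features(s) for s in test_sentences]))
    trans_scores = predicted / predicted.max()     # normalize by the maximum score
    print(trans_scores)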
As introduced in Section 2.1, several prior studies have used regression and classification methods for MT quality prediction without reference translations. In our approach, the MT quality of each sentence in the documents is also predicted without reference translations. The difference between our task and previous work is that previous work can make use of both features of the source sentence and features of the target sentence, while our task only leverages features of the source sentence, because in the late translation strategy the English sentences in the documents have not yet been translated at this step.
In this study, we adopt the ε-support vector regression (ε-SVR) method (Vapnik 1995) for the sentence-level MT quality prediction task. The SVR algorithm is firmly grounded in the framework of statistical learning theory (VC theory). The goal of a regression algorithm is to fit a flat function to the given training data points. Formally, given a set of training data points D = {(xi, yi) | i = 1, 2, …, n} ⊂ Rd × R, where xi is the input feature vector and yi is the associated score, the goal is to fit a function f which approximates the relation inherent in the data points. The standard form is:
min_{w, b, ξ, ξ*}  (1/2)·wᵀw + C·Σ_{i=1}^{n} ξi + C·Σ_{i=1}^{n} ξi*
subject to
wᵀφ(xi) + b − yi ≤ ε + ξi
yi − wᵀφ(xi) − b ≤ ε + ξi*
ξi, ξi* ≥ 0,  i = 1, …, n
The constant C > 0 is a parameter determining the trade-off between the flatness of f and the amount up to which deviations larger than ε are tolerated. In the experiments, we use the LIBSVM tool (Chang and Lin, 2001) with the RBF kernel for the task, and we use the parameter selection tool of 10-fold cross validation via grid search to find the best parameters on the training set with respect to mean squared error (MSE), and then use the best parameters to train on the whole training set.
We use the following two groups of features for each sentence: the first group includes several basic features, and the second group includes several parse-based features. All features are derived from the source English sentence. (Other features, including n-gram frequency and perplexity features, are not useful in our study, and MT features are not used because Google Translate is used as a black box.) The basic features are as follows:
1) Sentence length: the number of words in the sentence.
2) Sub-sentence number: the number of sub-sentences in the sentence. We simply use the punctuation marks as indicators of sub-sentences.
3) Average sub-sentence length: the average number of words in the sub-sentences within the sentence.
4) Percentage of nouns and adjectives: the percentage of noun or adjective words in the sentence.
5) Number of question words: the number of question words (who, whom, whose, when, where, which, how, why, what) in the sentence.
We use the Stanford Lexicalized Parser (Klein and Manning, 2002) with the provided English PCFG model to parse a sentence into a parse tree. The output tree is a context-free phrase structure grammar representation of the sentence. The parse features are then selected as follows:
1) Depth of the parse tree: the depth of the generated parse tree.
2) Number of SBARs in the parse tree: an SBAR is a clause introduced by a (possibly empty) subordinating conjunction, and it is an indicator of sentence complexity.
3) Number of NPs in the parse tree: the number of noun phrases in the parse tree.
4) Number of VPs in the parse tree: the number of verb phrases in the parse tree.
All the above feature values are scaled by using the provided svm-scale program. At this step, each English sentence si is associated with an MT quality score TransScore(si) predicted by the ε-SVR method. The score is finally normalized by dividing by the maximum score.
4.2 Evaluation
4.2.1 Evaluation Setup
In the experiments, we first constructed the gold-standard dataset in the following way: DUC2001 provided 309 English news articles for document summarization tasks, and the articles were grouped into 30 document sets. The news articles were selected from TREC-9. We chose five document sets (d04, d05, d06, d08, d11) with 54 news articles out of the DUC2001 document sets. The documents were then split into sentences, and we used 1736 sentences for evaluation. All the sentences were automatically translated into Chinese sentences by using the Google Translate service. Two Chinese college students were employed for data annotation. They read the original English sentence and the translated Chinese sentence, and then manually labeled the overall translation quality score for each sentence, separately. The translation quality is an overall measure of both the translation accuracy and the readability of the translated sentence. The score ranges between 1 and 5, where 1 means "very bad", 5 means "very good", and 3 means "normal". The correlation between the two sets of labeled scores is 0.646. The final translation quality score was the average of the scores provided by the two annotators. After annotation, we randomly separated the labeled sentence set into a training set of 1428 sentences and a test set of 308 sentences. We then used the LIBSVM tool for training and testing.
Two metrics were used for evaluating the prediction results:
Mean Square Error (MSE): This metric measures how correct each of the prediction values is on average, penalizing more severe errors more heavily. Given the set of prediction scores for the test sentences Ŷ = {ŷi | i = 1, …, n} and the set of manually assigned scores Y = {yi | i = 1, …, n}, the MSE of the prediction result is defined as
MSE(Ŷ) = (1/n)·Σ_{i=1}^{n} (yi − ŷi)²
Pearson's Correlation Coefficient (ρ): This metric measures whether the trends of the prediction values match the trends of the human-labeled data. The coefficient between Y and Ŷ is defined as
ρ = Σ_{i=1}^{n} (yi − ȳ)(ŷi − ŷ̄) / (n·sy·sŷ)
where ȳ and ŷ̄ are the sample means of Y and Ŷ, and sy and sŷ are the sample standard deviations of Y and Ŷ.
4.2.2 Evaluation Results
Table 1 shows the prediction results. We can see that the overall results are promising and the correlation is moderately high. The results are acceptable because we only make use of features derived from the source sentence, and they guarantee that the use of MT quality scores in the summarization process is feasible. We can also see that both the basic features and the parse features are beneficial to the overall prediction results.
Feature Set       MSE     ρ
Basic features    0.709   0.399
Parse features    0.702   0.395
All features      0.683   0.433
Table 1: Prediction results
5 Cross-Language Document Summarization
5.1 Methodology
In this section, we first compute the informativeness score for each sentence. The score reflects how well the sentence expresses the major topic in the documents. Various existing methods can be used for computing this score.
5 Cross-Language Document Summarization
5.1 Methodology
In this section, we first compute the informativeness score for each sentence. The score reflects how well the sentence expresses the major topic of the documents. Various existing methods can be used for computing this score. In this study, we adopt the centroid-based method, which is the algorithm used in the MEAD system. The method uses a simple heuristic to sum the sentence scores computed from different features. The score for each sentence is a linear combination of the weights computed from the following three features:
Centroid-based Weight. The sentences close to the centroid of the document set are usually more important than the sentences farther away. The centroid weight C(s_i) of a sentence s_i is calculated as the cosine similarity between the sentence text and the concatenated text of the whole document set D. The weight is then normalized by dividing by the maximal weight.
Sentence Position. The leading sentences of a document are usually important, so we calculate for each sentence a weight reflecting its position priority as P(s_i) = 1 − (i − 1)/n, where i is the sequence number of sentence s_i and n is the total number of sentences in the document. Obviously, i ranges from 1 to n.
First Sentence Similarity. Because the first sentence of a document is very important, a sentence similar to the first sentence is also important. Thus we use the cosine similarity between a sentence and the first sentence of the same document as the weight F(s_i) for sentence s_i.
After all the above weights are calculated for each sentence, we sum them to obtain the overall informativeness score of the sentence as follows:

InfoScore(s_i) = α·C(s_i) + β·P(s_i) + γ·F(s_i)

where α, β and γ are parameters reflecting the importance of the different features. We empirically set α = β = γ = 1. After the informativeness scores for all sentences are computed, the score of each sentence is normalized by dividing by the maximum score.
After we obtain the MT quality score and the informativeness score of each sentence in the document set, we linearly combine the two scores to get the overall score of each sentence. Formally, let TransScore(s_i) ∈ [0,1] and InfoScore(s_i) ∈ [0,1] denote the MT quality score and the informativeness score of sentence s_i; the overall score of the sentence is:

OverallScore(s_i) = (1 − λ)·InfoScore(s_i) + λ·TransScore(s_i)

where λ ∈ [0,1] is a parameter controlling the influences of the two factors. If λ is set to 0, the summary is extracted without considering the MT quality factor. In the experiments, we empirically set the parameter to 0.3 in order to balance the two factors of content informativeness and translation quality. For multi-document summarization, some sentences highly overlap with each other, and thus we apply the same greedy algorithm as in (Wan et al., 2006) to penalize sentences that highly overlap with other highly scored sentences, so that the informative, novel, and easy-to-translate sentences are finally chosen into the English summary. Finally, the sentences in the English summary are translated into the corresponding Chinese sentences by using Google Translate, and the Chinese summary is formed.
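A minimal sketch of the sentence scoring just described: centroid, position, and first-sentence weights combined into InfoScore, then mixed with the (already normalized) TransScore. Tokenization and raw term-frequency weighting are simplifying assumptions, and the greedy redundancy-removal step is omitted.

from collections import Counter

def cosine(a, b):
    # a, b: term-frequency Counters
    num = sum(a[t] * b[t] for t in a.keys() & b.keys())
    den = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def info_scores(doc_sentences, docset_tokens, alpha=1.0, beta=1.0, gamma=1.0):
    # doc_sentences: list of token lists for one document; docset_tokens: all tokens of the set D.
    centroid = Counter(docset_tokens)
    first = Counter(doc_sentences[0])
    n = len(doc_sentences)
    c = [cosine(Counter(s), centroid) for s in doc_sentences]
    cmax = max(c)
    c = [w / cmax for w in c] if cmax > 0 else c            # normalize centroid weights
    scores = []
    for i, s in enumerate(doc_sentences, start=1):
        p = 1.0 - (i - 1) / n                               # position priority
        f = cosine(Counter(s), first)                       # first-sentence similarity
        scores.append(alpha * c[i - 1] + beta * p + gamma * f)
    smax = max(scores)
    return [s / smax for s in scores] if smax > 0 else scores

def overall_scores(info, trans, lam=0.3):
    # trans: normalized TransScore values for the same sentences
    return [(1 - lam) * i + lam * t for i, t in zip(info, trans)]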
5.2 Evaluation
5.2.1 Evaluation Setup
In this experiment, we used the document sets provided by DUC2001 for evaluation. As mentioned in Section 4.2.1, DUC2001 provided 30 English document sets for generic multi-document summarization. The average number of documents per document set was 10. The sentences in each article had already been separated, and the sentence information had been stored into files. Generic reference English summaries were provided by NIST annotators for evaluation. In our study, we aimed to produce Chinese summaries for the English document sets. The summary length was limited to five sentences, i.e., each summary consisted of five sentences. The DUC2001 dataset was divided into the following two datasets:
Ideal Dataset: We manually labeled the MT quality scores for the sentences in five document sets (d04–d11), and we directly used the manually labeled scores in the summarization process. The ideal dataset contained these five document sets.
Real Dataset: The MT quality scores for the sentences in the remaining 25 document sets were automatically predicted by using the learned SVM regression model, and we used the automatically predicted scores in the summarization process. The real dataset contained these 25 document sets.
We performed two evaluation procedures: one based on the ideal dataset to validate the feasibility of the proposed approach, and the other based on the real dataset to demonstrate the effectiveness of the proposed approach in real applications. To date, various methods and metrics have been developed for English summary evaluation by comparing a system summary with reference summaries, such as the pyramid method (Nenkova et al., 2007) and the ROUGE metrics (Lin and Hovy, 2003). However, such methods and metrics cannot be directly used for evaluating a Chinese summary without reference Chinese summaries. Instead, we developed an evaluation protocol as follows: The evaluation was based on human scoring. Four Chinese college students participated in the evaluation as subjects. We developed a tool for helping the subjects evaluate each Chinese summary from the following three aspects:
Content: This aspect indicates how much a summary reflects the major content of the document set. After reading a summary, each user selects a score between 1 and 5 for the summary, where 1 means “very uninformative” and 5 means “very informative”.
Readability: This aspect indicates the readability level of the whole summary. After reading a summary, each user selects a score between 1 and 5, where 1 means “hard to read” and 5 means “easy to read”.
Overall: This aspect indicates the overall quality of a summary. After reading a summary, each user selects a score between 1 and 5, where 1 means “very bad” and 5 means “very good”.
We performed the evaluation procedures on the ideal dataset and the real dataset separately. During each evaluation procedure, we compared our proposed approach (λ=0.3) with the baseline approach that does not consider the MT quality factor (λ=0). The two summaries produced by the two systems for the same document set were presented in the same interface, and the four subjects assigned scores to each summary after they had read and compared the two summaries. The assigned scores were finally averaged across the document sets and across the subjects.

5.2.2 Evaluation Results
Table 2 shows the evaluation results on the ideal dataset with 5 document sets. We can see that, based on the manually labeled MT quality scores, the Chinese summaries produced by our proposed approach are significantly better than those produced by the baseline approach over all three aspects. All subjects agree that our proposed approach can produce more informative and easy-to-read Chinese summaries than the baseline approach. Table 3 shows the evaluation results on the real dataset with 25 document sets.
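The averages and significance markers reported in Tables 2 and 3 below can be reproduced with a simple aggregation plus a paired t-test; the data layout here (subjects × document sets per aspect) is hypothetical, and scipy's ttest_rel stands in for whichever t-test variant the authors actually used.

import numpy as np
from scipy.stats import ttest_rel

def compare_aspect(baseline, proposed, alpha=0.05):
    # baseline, proposed: arrays of shape (n_subjects, n_docsets) with 1-5 scores for one aspect.
    baseline, proposed = np.asarray(baseline, float), np.asarray(proposed, float)
    base_avg, prop_avg = float(baseline.mean()), float(proposed.mean())
    t, p = ttest_rel(proposed.mean(axis=0), baseline.mean(axis=0))
    return base_avg, prop_avg, p < alpha    # True marks a significant difference (the "*" in the tables)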
We can see that based on the automatically predicted MT quality scores, the Chinese summaries produced by our proposed approach are significantly better than that produced by the baseline approach over the readability aspect and the overall aspect. Almost all subjects agree that our proposed approach can produce more easy-to-read and highquality Chinese summaries than the baseline approach. Comparing the evaluation results in the two tables, we can find that the performance difference between the two approaches on the ideal dataset is bigger than that on the real dataset, especially on the content aspect. The results demonstrate that the more accurate the MT quality scores are, the more significant the performance improvement is. Overall, the proposed approach is effective to produce good-quality Chinese summaries for English document sets. Baseline Approach Proposed Approach content readability overall content readability overall Subject1 3.2 2.6 2.8 3.4 3.0 3.4 Subject2 3.0 3.2 3.2 3.4 3.6 3.4 Subject3 3.4 2.8 3.2 3.6 3.8 3.8 Subject4 3.2 3.0 3.2 3.8 3.8 3.8 Average 3.2 2.9 3.1 3.55* 3.55* 3.6* Table 2: Evaluation results on the ideal dataset (5 document sets) Baseline Approach Proposed Approach content readability overall content readability overall Subject1 2.64 2.56 2.60 2.80 3.24 2.96 Subject2 3.60 2.76 3.36 3.52 3.28 3.64 Subject3 3.52 3.72 3.44 3.56 3.80 3.48 Subject4 3.16 2.96 3.12 3.16 3.44 3.52 Average 3.23 3.00 3.13 3.26 3.44* 3.40* Table 3: Evaluation results on the real dataset (25 document sets) (* indicates the difference between the average score of the proposed approach and that of the baseline approach is statistically significant by using t-test.) 923 5.2.3 Example Analysis In this section, we give two running examples to better show the effectiveness of our proposed approach. The Chinese sentences and the original English sentences in the summary are presented together. The normalized MT quality score for each sentence is also given at the end of the Chinese sentence. Document set 1: D04 from the ideal dataset Summary by baseline approach: s1: 预计美国的保险公司支付,估计在佛罗里达州的73亿美元 (37亿英镑),作为安德鲁飓风的结果-迄今为止最昂贵的灾 难曾经面临产业。(0.56) (US INSURERS expect to pay out an estimated Dollars 7.3bn (Pounds 3.7bn) in Florida as a result of Hurricane Andrew - by far the costliest disaster the industry has ever faced. ) s2: 有越来越多的迹象表明安德鲁飓风,不受欢迎的,因为它 的佛罗里达和路易斯安那州的受灾居民,最后可能不伤害到连 任的布什总统竞选。(0.67) (THERE are growing signs that Hurricane Andrew, unwelcome as it was for the devastated inhabitants of Florida and Louisiana, may in the end do no harm to the re-election campaign of President George Bush.) s3: 一般事故发生后,英国著名保险公司昨日表示,保险索赔 的安德鲁飓风所引发的成本也高达4000万美元&#39;。 (0.44) (GENERAL ACCIDENT said yesterday that insurance claims arising from Hurricane Andrew could 'cost it as much as Dollars 40m'.) s4: 在巴哈马,政府发言人麦库里说,4人死亡已离岛东部群岛 报告。 (0.56) (In the Bahamas, government spokesman Mr Jimmy Curry said four deaths had been reported on outlying eastern islands.) s5: 新奥尔良的和1.6万人,是特别脆弱,因为该市位于海平面 以下,有密西西比河通过其中心的运行和一个大型湖泊立即向 北方。(0.44) (New Orleans, with a population of 1.6m, is particularly vulnerable because the city lies below sea level, has the Mississippi River running through its centre and a large lake immediately to the north.) Summary by proposed approach: s1: 预计美国的保险公司支付,估计在佛罗里达州的73亿美元 (37亿英镑),作为安德鲁飓风的结果-迄今为止最昂贵的灾 难曾经面临产业。(0.56) (US INSURERS expect to pay out an estimated Dollars 7.3bn (Pounds 3.7bn) in Florida as a result of Hurricane Andrew - by far the costliest disaster the industry has ever faced.) 
s2: 有越来越多的迹象表明安德鲁飓风,不受欢迎的,因为它 的佛罗里达和路易斯安那州的受灾居民,最后可能不伤害到连 任的布什总统竞选。(0.67) (THERE are growing signs that Hurricane Andrew, unwelcome as it was for the devastated inhabitants of Florida and Louisiana, may in the end do no harm to the re-election campaign of President George Bush.) s3: 在巴哈马,政府发言人麦库里说,4人死亡已离岛东部群岛 报告。(0.56) (In the Bahamas, government spokesman Mr Jimmy Curry said four deaths had been reported on outlying eastern islands.) s4: 在首当其冲的损失可能会集中在美国的保险公司,业内分 析人士昨天说。 (0.89) (The brunt of the losses are likely to be concentrated among US insurers, industry analysts said yesterday.) s5: 在北迈阿密,损害是最小的。(1.0) (In north Miami, damage is minimal.) Document set 2: D54 from the real dataset Summary by baseline approach: s1: 两个加州11月6日投票的主张,除其他限制外,全州成员及 州议员的条件。(0.57) (Two propositions on California's Nov. 6 ballot would, among other things, limit the terms of statewide officeholders and state legislators.) s2: 原因之一是任期限制将开放到现在的政治职务任职排除了 许多人的职业生涯。(0.36) (One reason is that term limits would open up politics to many people now excluded from office by career incumbents.) s3: 建议限制国会议员及州议员都很受欢迎,越来越多的条件 是,根据专家和投票。(0.20) (Proposals to limit the terms of members of Congress and of state legislators are popular and getting more so, according to the pundits and the polls.) s4: 国家法规的酒吧首先从运行时间为国会候选人已举行了加 入的资格规定了宪法规定,并已失效。(0.24) (State statutes that bar first-time candidates from running for Congress have been held to add to the qualifications set forth in the Constitution and have been invalidated.) s5: 另一个论点是,公民的同时,不断进入新的华盛顿国会将 面临流动更好的结果,比政府的任期较长的代表提供的。(0.20) (Another argument is that a citizen Congress with its continuing flow of fresh faces into Washington would result in better government than that provided by representatives with lengthy tenure.) Summary by proposed approach: s1: 两个加州11 月6 日投票的主张,除其他限制外,全州成员 及州议员的条件。(0.57) (Two propositions on California's Nov. 6 ballot would, among other things, limit the terms of statewide officeholders and state legislators.) s2: 原因之一是任期限制将开放到现在的政治职务任职排除了 许多人的职业生涯。(0.36) (One reason is that term limits would open up politics to many people now excluded from office by career incumbents.) s3: 另一个论点是,公民的同时,不断进入新的华盛顿国会将 面临流动更好的结果,比政府的任期较长的代表提供的。(0.20) (Another argument is that a citizen Congress with its continuing flow of fresh faces into Washington would result in better government than that provided by representatives with lengthy tenure.) s4: 有两个国会任期限制,经济学家,至少公共选择那些劝 说,要充分理解充分的理由。(0.39) (There are two solid reasons for congressional term limitation that economists, at least those of the public-choice persuasion, should fully appreciate.) s5: 与国会的问题的根源是,除非有重大丑闻,几乎是不可能 战胜现任。(0.47) (The root of the problems with Congress is that, barring major scandal, it is almost impossible to defeat an incumbent.) 6 Discussion In this study, we adopt the late translation strategy for cross-document summarization. As mentioned earlier, the late translation strategy has some advantages over the early translation strategy. However, in the early translation strategy, we can use the features derived from both the source English sentence and the target Chinese sentence to improve the MT quality prediction results. Overall, the framework of our proposed approach can be easily adapted for cross-document summarization with the early translation strategy. 924 And an empirical comparison between the two strategies is left as our future work. 
Though this study focuses on English-toChinese document summarization, crosslanguage summarization tasks for other languages can also be solved by using our proposed approach. 7 Conclusion and Future Work In this study we propose a novel approach to address the cross-language document summarization task. Our proposed approach predicts the MT quality score of each English sentence and then incorporates the score into the summarization process. The user study results verify the effectiveness of the approach. In future work, we will manually translate English reference summaries into Chinese reference summaries, and then adopt the ROUGE metrics to perform automatic evaluation of the extracted Chinese summaries by comparing them with the Chinese reference summaries. Moreover, we will further improve the sentence’s MT quality by using sentence compression or sentence reduction techniques. Acknowledgments This work was supported by NSFC (60873155), Beijing Nova Program (2008B03), NCET (NCET-08-0006), RFDP (20070001059) and National High-tech R&D Program (2008AA01Z421). We thank the students for participating in the user study. We also thank the anonymous reviewers for their useful comments. References J. Albrecht and R. Hwa. 2007. A re-examination of machine learning approaches for sentence-level mt evaluation. In Proceedings of ACL2007. M. R. Amini, P. Gallinari. 2002. The Use of Unlabeled Data to Improve Supervised Learning for Text Summarization. In Proceedings of SIGIR2002. J. Blatz, E. Fitzgerald, G. Foster, S. Gandrabur, C. Goutte, A. Kulesza, A. Sanchis, and N. Ueffing. 2003. Confidence estimation for statistical machine translation. Johns Hopkins Summer Workshop Final Report. J. Chae and A. Nenkova. 2009. Predicting the fluency of text with shallow structural features: case studies of machine translation and human-written text. In Proceedings of EACL2009. G. de Chalendar, R. Besançon, O. Ferret, G. Grefenstette, and O. Mesnard. 2005. Crosslingual summarization with thematic extraction, syntactic sentence simplification, and bilingual generation. In Workshop on Crossing Barriers in Text Summarization Research, 5th International Conference on Recent Advances in Natural Language Processing (RANLP2005). C.-C. Chang and C.-J. Lin. 2001. LIBSVM : a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm G. ErKan, D. R. Radev. LexPageRank. 2004. Prestige in Multi-Document Text Summarization. In Proceedings of EMNLP2004. M. Gamon, A. Aue, and M. Smets. 2005. Sentencelevel MT evaluation without reference translations: beyond language modeling. In Proceedings of EAMT2005. D. Klein and C. D. Manning. 2002. Fast Exact Inference with a Factored Model for Natural Language Parsing. In Proceedings of NIPS2002. J. Kupiec, J. Pedersen, F. Chen. 1995. A.Trainable Document Summarizer. In Proceedings of SIGIR1995. A. Leuski, C.-Y. Lin, L. Zhou, U. Germann, F. J. Och, E. Hovy. 2003. Cross-lingual C*ST*RD: English access to Hindi information. ACM Transactions on Asian Language Information Processing, 2(3): 245-269. J.-M. Lim, I.-S. Kang, J.-H. Lee. 2004. Multidocument summarization using cross-language texts. In Proceedings of NTCIR-4. C. Y. Lin, E. Hovy. 2000. The Automated Acquisition of Topic Signatures for Text Summarization. In Proceedings of the 17th Conference on Computational Linguistics. C..-Y. Lin and E.. H. Hovy. 2002. From Single to Multi-document Summarization: A Prototype System and its Evaluation. In Proceedings of ACL-02. C.-Y. Lin and E.H. Hovy. 
2003. Automatic Evaluation of Summaries Using N-gram Co-occurrence Statistics. In Proceedings of HLT-NAACL -03. C.-Y. Lin, L. Zhou, and E. Hovy. 2005. Multilingual summarization evaluation 2005: automatic evaluation report. In Proceedings of MSE (ACL-2005 Workshop). H. P. Luhn. 1969. The Automatic Creation of literature Abstracts. IBM Journal of Research and Development, 2(2). R. Mihalcea, P. Tarau. 2004. TextRank: Bringing Order into Texts. In Proceedings of EMNLP2004. R. Mihalcea and P. Tarau. 2005. A language independent algorithm for single and multiple document summarization. In Proceedings of IJCNLP-05. A. Nenkova and A. Louis. 2008. Can you summarize this? Identifying correlates of input difficulty for generic multi-document summarization. In Proceedings of ACL-08:HLT. A. Nenkova, R. Passonneau, and K. McKeown. 2007. The Pyramid method: incorporating human content selection variation in summarization evaluation. 925 ACM Transactions on Speech and Language Processing (TSLP), 4(2). C. Orasan, and O. A. Chiorean. 2008. Evaluation of a Crosslingual Romanian-English Multi-document Summariser. In Proceedings of 6th Language Resources and Evaluation Conference (LREC2008). P. Pingali, J. Jagarlamudi and V. Varma. 2007. Experiments in cross language query focused multidocument summarization. In Workshop on Cross Lingual Information Access Addressing the Information Need of Multilingual Societies in IJCAI2007. C. Quirk. 2004. Training a sentence-level machine translation confidence measure. In Proceedings of LREC2004. D. R. Radev, H. Y. Jing, M. Stys and D. Tam. 2004. Centroid-based summarization of multiple documents. Information Processing and Management, 40: 919-938. A. Siddharthan and K. McKeown. 2005. Improving multilingual summarization: using redundancy in the input to correct MT errors. In Proceedings of HLT/EMNLP-2005. L. Specia, Z. Wang, M. Turchi, J. Shawe-Taylor, C. Saunders. 2009. Improving the Confidence of Machine Translation Quality Estimates. In MT Summit 2009 (Machine Translation Summit XII). V. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer. X. Wan, H. Li and J. Xiao. 2010. EUSUM: extracting easy-to-understand English summaries for nonnative readers. In Proceedings of SIGIR2010. X. Wan, J. Yang and J. Xiao. 2006. Using crossdocument random walks for topic-focused multidocumetn summarization. In Proceedings of WI2006. X. Wan and J. Yang. 2008. Multi-document summarization using cluster-based link analysis. In Proceedings of SIGIR-08. X. Wan, J. Yang and J. Xiao. 2007. Towards an Iterative Reinforcement Approach for Simultaneous Document Summarization and Keyword Extraction. In Proceedings of ACL2007. K.-F. Wong, M. Wu and W. Li. 2008. Extractive summarization using supervised and semi-supervised learning. In Proceedings of COLING-08. H. Y. Zha. 2002. Generic Summarization and Keyphrase Extraction Using Mutual Reinforcement Principle and Sentence Clustering. In Proceedings of SIGIR2002. 926
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 927–936, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics A new Approach to Improving Multilingual Summarization using a Genetic Algorithm Marina Litvak Ben-Gurion University of the Negev Beer Sheva, Israel [email protected] Mark Last Ben-Gurion University of the Negev Beer Sheva, Israel [email protected] Menahem Friedman Ben-Gurion University of the Negev Beer Sheva, Israel [email protected] Abstract Automated summarization methods can be defined as “language-independent,” if they are not based on any languagespecific knowledge. Such methods can be used for multilingual summarization defined by Mani (2001) as “processing several languages, with summary in the same language as input.” In this paper, we introduce MUSE, a languageindependent approach for extractive summarization based on the linear optimization of several sentence ranking measures using a genetic algorithm. We tested our methodology on two languages—English and Hebrew—and evaluated its performance with ROUGE-1 Recall vs. stateof-the-art extractive summarization approaches. Our results show that MUSE performs better than the best known multilingual approach (TextRank1) in both languages. Moreover, our experimental results on a bilingual (English and Hebrew) document collection suggest that MUSE does not need to be retrained on each language and the same model can be used across at least two different languages. 1 Introduction Document summaries should use a minimum number of words to express a document’s main ideas. As such, high quality summaries can significantly reduce the information overload many professionals in a variety of fields must contend 1We evaluated several summarizers—SUMMA, MEAD, Microsoft Word Autosummarize and TextRank—on the DUC 2002 corpus. Our results show that TextRank performed best. In addition, TextRank can be considered languageindependent as long as it does not perform any morphological analysis. with on a daily basis (Filippova et al., 2009), assist in the automated classification and filtering of documents, and increase search engines precision. Automated summarization methods can use different levels of linguistic analysis: morphological, syntactic, semantic and discourse/pragmatic (Mani, 2001). Although the summary quality is expected to improve when a summarization technique includes language specific knowledge, the inclusion of that knowledge impedes the use of the summarizer on multiple languages. Only systems that perform equally well on different languages without language-specific knowledge (including linguistic analysis) can be considered language-independent summarizers. The publication of information on the Internet in an ever-increasing variety of languages 2 dictates the importance of developing multilingual summarization approaches. There is a particular need for language-independent statistical techniques that can be readily applied to text in any language without depending on language-specific linguistic tools. In the absence of such techniques, the only alternative to language-independent summarization would be the labor-intensive translation of the entire document into a common language. Here we introduce MUSE (MUltilingual Sentence Extractor), a new approach to multilingual single-document extractive summarization where summarization is considered as an optimization or a search problem. 
We use a Genetic Algorithm (GA) to find an optimal weighted linear combination of 31 statistical sentence scoring methods that are all language-independent and are based on either a vector or a graph representation of a document, where both representations are based on a 2 Gulli and Signorini (2005) used Web searches in 75 different languages to estimate the size of the Web as of the end of January 2005. 927 word segmentation. We have evaluated our approach on two monolingual corpora of English and Hebrew documents and, additionally, on one bilingual corpora comprising English and Hebrew documents. Our evaluation experiments sought to - Compare the GA-based approach for singledocument extractive summarization (MUSE) to the best known sentence scoring methods. - Determine whether the same weighting model is applicable across two different languages. This paper is organized as follows. The next section describes the related work in statistical extractive summarization. Section 3 introduces MUSE, the GA-based approach to multilingual single-document extractive summarization. Section 4 presents our experimental results on monolingual and bilingual corpora. Our conclusions and suggestions for future work comprise the final section. 2 Related Work Extractive summarization is aimed at the selection of a subset of the most relevant fragments from a source text into the summary. The fragments can be paragraphs (Salton et al., 1997), sentences (Luhn, 1958), keyphrases (Turney, 2000) or keywords (Litvak and Last, 2008). Statistical methods for calculating the relevance score of each fragment can be categorized into several classes: cue-based (Edmundson, 1969), keyword- or frequency-based (Luhn, 1958; Edmundson, 1969; Neto et al., 2000; Steinberger and Jezek, 2004; Kallel et al., 2004; Vanderwende et al., 2007), title-based (Edmundson, 1969; Teufel and Moens, 1997), position-based (Baxendale, 1958; Edmundson, 1969; Lin and Hovy, 1997; Satoshi et al., 2001) and length-based (Satoshi et al., 2001). Considered the first work on sentence scoring for automated text summarization, Luhn (1958) based the significance factor of a sentence on the frequency and the relative positions of significant words within a sentence. Edmundson (1969) tested different linear combinations of four sentence ranking scoring methods—cue, key, title and position—to identify that which performed best on a training corpus. Linear combinations of several statistical sentence ranking methods were also applied in the MEAD (Radev et al., 2001) and SUMMA (Saggion et al., 2003) approaches, both of which use the vector space model for text representation and a set of predefined or user-specified weights for a combination of position, frequency, title, and centroid-based (MEAD) features. Goldstein et al. (1999) integrated linguistic and statistical features. In none of these works, however, did the researchers attempt to find the optimal weights for the best linear combination. Information retrieval and machine learning techniques were integrated to determine sentence importance (Kupiec et al., 1995; Wong et al., 2008). Gong and Liu (2001) and Steinberger and Jezek (2004) used singular value decomposition (SVD) to generate extracts. Ishikawa et al. (2002) combined conventional sentence extraction and a trainable classifier based on support vector machines. Some authors reduced the summarization process to an optimization or a search problem. 
Hassel and Sjobergh (2006) used a standard hillclimbing algorithm to build summaries that maximize the score for the total impact of the summary. A summary consists of first sentences from the document was used as a starting point for the search, and all neighbours (summaries that can be created by simply removing one sentence and adding another) were examined, looking for a better summary. Kallel et al. (2004) and Liu et al. (2006b) used genetic algorithms (GAs), which are known as prominent search and optimization methods (Goldberg, 1989), to find sets of sentences that maximize summary quality metrics, starting from a random selection of sentences as the initial population. In this capacity, however, the high computational complexity of GAs is a disadvantage. To choose the best summary, multiple candidates should be generated and evaluated for each document (or document cluster). Following a different approach, Turney (2000) used a GA to learn an optimized set of parameters for a keyword extractor embedded in the Extractor tool.3 Or˘asan et al. (2000) enhanced the preference-based anaphora resolution algorithms by using a GA to find an optimal set of values for the outcomes of 14 indicators and apply the optimal combination of values from data on one text to a different text. With such approach, training may be the only time-consuming phase in the operation. 3http://www.extractor.com/ 928 Today, graph-based text representations are becoming increasingly popular, due to their ability to enrich the document model with syntactic and semantic relations. Salton et al. (1997) were among the first to make an attempt at using graphbased ranking methods in single document extractive summarization, generating similarity links between document paragraphs and using degree scores in order to extract the important paragraphs from the text. Erkan and Radev (2004) and Mihalcea (2005) introduced algorithms for unsupervised extractive summarization that rely on the application of iterative graph-based ranking algorithms, such as PageRank (Brin and Page, 1998) and HITS (Kleinberg, 1999). Their methods represent a document as a graph of sentences interconnected by similarity relations. Various similarity functions can be applied: cosine similarity as in (Erkan and Radev, 2004), simple overlap as in (Mihalcea, 2005), or other functions. Edges representing the similarity relations can be weighted (Mihalcea, 2005) or unweighted (Erkan and Radev, 2004): two sentences are connected if their similarity is above some predefined threshold value. 3 MUSE – MUltilingual Sentence Extractor In this paper we propose a learning approach to language-independent extractive summarization where the best set of weights for a linear combination of sentence scoring methods is found by a genetic algorithm trained on a collection of document summaries. The weighting vector thus obtained is used for sentence scoring in future summarizations. Since most sentence scoring methods have a linear computational complexity, only the training phase of our approach is time-consuming. 3.1 Sentence scoring methods Our work is aimed at identifying the best linear combination of the 31 sentence scoring methods listed in Table 1. Each method description includes a reference to the original work where the method was proposed for extractive summarization. Methods proposed in this paper are denoted by new. 
Formulas incorporate the following notation: a sentence is denoted by S, a text document by D, the total number of words in S by N, the total number of sentences in D by n, the sequential number of S in D by i, and the in-document term frequency of the term t by tf(t). In the LUHN method, Wi and Ni are the number of keywords and the total number of words in the ith cluster, respectively, such that clusters are portions of a sentence bracketed by keywords, i.e., frequent, noncommon words.4 Figure 1 demonstrates the taxonomy of the methods listed in Table 1. Methods that require pre-defined threshold values are marked with a cross and listed in Table 2 together with the average threshold values obtained after method evaluation on English and Hebrew corpora. Each method was evaluated on both corpora, with different threshold t ∈[0, 1] (only numbers with one decimal digit were considered). Threshold values resulted in the best ROUGE-1 scores, were selected. A threshold of 1 means that all terms are considered, while a value of 0 means that only terms with the highest rank (tf, degree, or pagerank) are considered. The methods are divided into three main categories—structure-, vector-, and graph-based—according to the text representation model, and each category is divided into sub-categories. Section 3.3 describes our application of a GA to the summarization task. Table 2: Selected thresholds for threshold-based scoring methods Method Threshold LUHN 0.9 LUHN DEG 0.9 LUHN PR 0.0 KEY [0.8, 1.0] KEY DEG [0.8, 1.0] KEY PR [0.1, 1.0] COV 0.9 COV DEG [0.7, 0.9] COV PR 0.1 3.2 Text representation models The vector-based scoring methods listed in Table 1 use tf or tf-idf term weights to evaluate sentence importance. In contrast, representation used by the graph-based methods (except for TextRank) is based on the word-based graph representation models described in (Schenker et al., 2004). Schenker et al. (2005) showed that such graph representations can outperform the vector space model on several document categorization tasks. In the graph representation used by us in this work 4Luhn’s experiments suggest an optimal limit of 4 or 5 non-significant words between keywords. 
929 Table 1: Sentence scoring metrics Name Description Source POS F Closeness to the beginning of the document: 1 i (Edmundson, 1969) POS L Closeness to the end of the document: i (Baxendale, 1958) POS B Closeness to the borders of the document: max( 1 i , 1 n−i+1) (Lin and Hovy, 1997) LEN W Number of words in the sentence (Satoshi et al., 2001) LEN CH Number of characters in the sentence5 LUHN maxi∈{clusters(S)}{CSi}, CSi = W 2 i Ni (Luhn, 1958) KEY Sum of the keywords frequencies: P t∈{Keywords(S)} tf(t) (Edmundson, 1969) COV Ratio of keywords number (Coverage): |Keywords(S)| |Keywords(D)| (Liu et al., 2006a) TF Average term frequency for all sentence words: P t∈S tf(t) N (Vanderwende et al., 2007) TFISF P t∈S tf(t) × isf(t), isf(t) = 1 −log(n(t)) log(n) , (Neto et al., 2000) n(t) is the number of sentences containing t SVD Length of a sentence vector in Σ2 · V T after computing Singular Value (Steinberger and Jezek, 2004) Decomposition of a term by sentences matrix A = UΣV T TITLE O Overlap similarity6 to the title: sim(S, T) = |S∩T | min{|S|,|T |} (Edmundson, 1969) TITLE J Jaccard similarity to the title: sim(S, T) = |S∩T | |S∪T | TITLE C Cosine similarity to the title: sim(⃗S, ⃗T) = cos(⃗S, ⃗T) = ⃗S•⃗T |⃗S|•|⃗T| D COV O Overlap similarity to the document complement new sim(S, D −S) = |S∩T | min{|S|,|D−S|} D COV J Jaccard similarity to the document complement sim(S, D −S) = |S∩T | |S∪D−S| D COV C Cosine similarity to the document complement cos(⃗S, ⃗ D −S) = ⃗S• ⃗ D−S |⃗S|•| ⃗ D−S| LUHN DEG Graph-based extensions of LUHN, KEY and COV measures respectively. KEY DEG Node degree is used instead of a word frequency: words are considered COV DEG significant if they are represented by nodes having a degree higher than a predefined threshold DEG Average degree for all sentence nodes: P i∈{words(S)} Degi N GRASE Frequent sentences from bushy paths are selected. Each sentence in the bushy path gets a domination score that is the number of edges with its label in the path normalized by the sentence length. The relevance score for a sentence is calculated as a sum of its domination scores over all paths. LUHN PR Graph-based extensions of LUHN, KEY and COV measures respectively. KEY PR Node PageRank score is used instead of a word frequency: words are considered COV PR significant if they are represented by nodes having a PageRank score higher than a predefined threshold PR Average PageRank for all sentence nodes: P t∈S P R(t) N TITLE E O Overlap-based edge matching between title and sentence graphs TITLE E J Jaccard-based edge matching between title and sentence graphs D COV E O Overlap-based edge matching between sentence and a document complement graphs D COV E J Jaccard-based edge matching between sentence and a document complement graphs ML TR Multilingual version of TextRank without morphological analysis: (Mihalcea, 2005) Sentence score equals to PageRank (Brin and Page, 1998) rank of its node: WS(Vi) = (1 −d) + d ∗P Vj∈In(Vi) wji P Vk∈Out(Vj ) wjk WS(Vj) nodes represent unique terms (distinct words) and edges represent order-relationships between two terms. There is a directed edge from A to B if an A term immediately precedes the B term in any sentence of the document. We label each edge with the IDs of sentences that contain both words in the specified order. 3.3 Optimization—learning the best linear combination We found the best linear combination of the methods listed in Table 1 using a Genetic Algorithm (GA). GAs are categorized as global search heuristics. 
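(As a brief aside before turning to the GA itself: the word-graph representation of Section 3.2 that underlies the degree- and PageRank-based metrics can be sketched as follows. The networkx-based construction and the simplified DEG/PR scores are illustrative assumptions, not the authors' implementation.)

import networkx as nx

def build_word_graph(sentences):
    # sentences: list of token lists; nodes are unique terms, and a directed edge
    # A -> B is added when term A immediately precedes term B in some sentence.
    # The edge attribute accumulates the IDs of sentences containing that pair.
    g = nx.DiGraph()
    for sid, tokens in enumerate(sentences):
        for a, b in zip(tokens, tokens[1:]):
            if g.has_edge(a, b):
                g[a][b]["sentences"].add(sid)
            else:
                g.add_edge(a, b, sentences={sid})
    return g

def deg_score(graph, tokens):
    # DEG metric: average degree of the sentence's terms.
    return sum(graph.degree(t) for t in tokens if t in graph) / max(len(tokens), 1)

def pr_score(graph, tokens, ranks=None):
    # PR metric: average PageRank of the sentence's terms.
    ranks = ranks if ranks is not None else nx.pagerank(graph)
    return sum(ranks.get(t, 0.0) for t in tokens) / max(len(tokens), 1)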
Figure 2 shows a simplified GA flowchart. A typical genetic algorithm requires (1) a genetic representation of the solution domain, and (2) a fitness function to evaluate the solution domain.

Figure 1: Taxonomy of language-independent sentence scoring methods
Figure 2: Simplified flowchart of a Genetic Algorithm

We represent the solution as a vector of weights for a linear combination of sentence scoring methods—real-valued numbers in the unlimited range, normalized in such a way that they sum up to 1. The vector size is fixed and equals the number of methods used in the combination. Defined over the genetic representation, the fitness function measures the quality of the represented solution. We use ROUGE-1 Recall (Lin and Hovy, 2003) as the fitness function measuring summarization quality, which is maximized during the optimization procedure. Below we describe each phase of the optimization procedure in detail.
Initialization
The GA will explore only a small part of the search space if the population is too small, whereas it slows down if there are too many solutions. We start from N = 500 randomly generated genes/solutions as the initial population, a choice that proved empirically to work well. Each gene is represented by a weighting vector v_i = (w_1, …, w_D) having a fixed number of D ≤ 31 elements. All elements are generated from a standard normal distribution, with µ = 0 and σ² = 1, and normalized to sum up to 1. For this solution representation, a negative weight, if it occurs, can be considered a “penalty” for the associated metric.
Selection
During each successive generation, a proportion of the existing population is selected to breed a new generation. We use a truncation selection method that rates the fitness of each solution and selects the best fifth (100 out of 500) of the individual solutions, i.e., those attaining the maximal ROUGE values. In this manner, we discard “bad” solutions and prevent them from reproducing. We also use elitism—a method that prevents losing the best solution found so far by copying it to the next generation.
Reproduction
In this stage, new genes/solutions are introduced into the population, i.e., new points in the search space are explored. These new solutions are generated from those selected through the following genetic operators: mating, crossover, and mutation. In mating, a pair of “parent” solutions is randomly selected, and a new solution is created using crossover and mutation, which are the most important parts of a genetic algorithm; the GA performance is influenced mainly by these two operators. New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size N is generated.
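The optimization loop can be sketched as below. The fitness callback is assumed to return the mean ROUGE-1 Recall of the summaries produced with a given weight vector, the crossover and mutation helpers anticipate the operator parameters described in the next paragraphs (80% averaging crossover with λ = 0.5; 3% mutation perturbing one weight by a factor in [−0.3, 0.3], one possible reading of that description), and the generation cap is an added safeguard not mentioned in the text.

import numpy as np

def init_population(pop_size=500, dim=31, rng=None):
    # Genes are weight vectors drawn from N(0, 1) and normalized to sum to 1.
    rng = rng or np.random.default_rng(0)
    pop = rng.standard_normal((pop_size, dim))
    return pop / pop.sum(axis=1, keepdims=True)

def crossover(p1, p2, rng, lam=0.5, p_cross=0.8):
    if rng.random() < p_cross:
        return lam * p1 + (1.0 - lam) * p2                 # weighted average of the parents
    return (p1 if rng.random() < 0.5 else p2).copy()       # otherwise duplicate one parent

def mutate(v, rng, p_mut=0.03, low=-0.3, high=0.3):
    v = v.copy()
    if rng.random() < p_mut:
        i = rng.integers(len(v))
        v[i] += v[i] * rng.uniform(low, high)              # perturb one weight relative to its value
    return v

def evolve(fitness, pop_size=500, dim=31, max_generations=50, eps=1.0e-21):
    rng = np.random.default_rng(0)
    pop = init_population(pop_size, dim, rng)
    best, best_fit = None, -np.inf
    for _ in range(max_generations):
        scores = np.array([fitness(v) for v in pop])
        order = np.argsort(scores)[::-1]
        if best is not None and scores[order[0]] - best_fit < eps:
            break                                          # fitness plateau: terminate
        best, best_fit = pop[order[0]].copy(), float(scores[order[0]])
        parents = pop[order[: pop_size // 5]]              # truncation selection: best fifth
        children = [best.copy()]                           # elitism
        while len(children) < pop_size:
            i, j = rng.integers(len(parents), size=2)
            children.append(mutate(crossover(parents[i], parents[j], rng), rng))
        pop = np.array(children)
    return best, best_fit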
Our crossover operator includes a probability (80%) that a new and different offspring solution will be generated by calculating the weighted average of two “parent” vectors according to (Vignaux and Michalewicz, 1991). Formally, a new vector v will be created from two vectors v1 and v2 according to the formula v = λ ∗v1 + (1 −λ) ∗v2 (we set λ = 0.5). There is a probability of 20% that the offspring will be a duplicate of one of its parents. Mutation in GAs functions both to preserve the existing diversity and to introduce new variation. It is aimed at preventing GA from falling into local extreme, but it should not be applied too often, because then GA will in fact change to random search. Our mutation operator includes a probability (3%) that an arbitrary weight in a vector will be changed by a uniformly randomized factor in the range of [−0.3, 0.3] from its original value. Termination The generational process is repeated until a termination condition—a plateau of solution/combination fitness such that successive iterations no longer produce better results—has been reached. The minimal improvement in our experiments was set to ǫ = 1.0E −21. 4 Experiments 4.1 Overview The MUSE summarization approach was evaluated using a comparative experiment on two monolingual corpora of English and Hebrew texts and on a bilingual corpus of texts in both languages. We intentionally chose English and Hebrew, which belong to distinct language families (Indo-European and Semitic languages, respectfully), to ensure that the results of our evaluation would be widely generalizable. The specific goals of the experiment are to: - Evaluate the optimal sentence scoring models induced from the corpora of summarized documents in two different languages. - Compare the performance of the GA-based multilingual summarization method proposed in this work to the state-of-the-art approaches. - Compare method performance on both languages. - Determine whether the same sentence scoring model can be efficiently used for extractive summarization across two different languages. 4.2 Text preprocessing Crucial to extractive summarization, proper sentence segmentation contributes to the quality of summarization results. For English sentences, we used the sentence splitter provided with the MEAD summarizer (Radev et al., 2001). A simple splitter that can split the text at periods, exclamation points, or question marks was used for the Hebrew text.7 4.3 Experiment design The English text material we used in our experiments comprised the corpus of summarized documents available to the single document summarization task at the Document Understanding Conference, 2002 (DUC, 2002). This benchmark dataset contains 533 news articles, each accompanied by two to three human-generated abstracts of approximately 100 words each. For the Hebrew language, however, to the best of our knowledge, no summarization benchmarks exist. To generate a corpus of summarized Hebrew texts, therefore, we set up an experiment where human assessors were given 50 news articles of 250 to 830 words each from the Website of the Haaretz newspaper.8 All assessors were provided with the Tool Assisting Human Assessors (TAHA) software tool9 that enables sentences to be easily selected and stored for later inclusion in the document extract. In total, 70 undergraduate students from the Department of Information Systems Engineering, Ben Gurion University of the Negev participated in the experiment. 
Each student participant was randomly assigned ten different documents and instructed to (1) spend at least five minutes on each document, (2) ignore dialogs and quotations, (3) read the whole document before beginning sentence extraction, (4) ignore redundant, repetitive, and overly detailed information, and (5) remain within the minimal and maximal summary length constraints (95 and 100 words, respectively). Summaries were assessed for quality by comparing each student’s summary to those of all the other students using the ROUGE evaluation toolkit adapted to Hebrew (the regular expressions specifying “word” were adapted to the Hebrew alphabet; the same toolkit was used for summary evaluation on the Hebrew corpus) and the ROUGE-1 metric (Lin and Hovy, 2003). We filtered out all the summaries produced by assessors that received an average ROUGE score below 0.5, i.e., agreed with the rest of the assessors in less than 50% of cases. Finally, our corpus of summarized Hebrew texts was compiled from the summaries of about 60% of the most consistent assessors, with an average of seven extracts per single document (the dataset is available at http://www.cs.bgu.ac.il/~litvakm/research/). The ROUGE scores of the selected assessors are distributed between 50 and 57 percent. The third, bilingual, experimental corpus was assembled from documents in both languages.

4.4 Experimental Results
We evaluated English and Hebrew summaries using the ROUGE-1, 2, 3, 4, L, SU and W metrics described in (Lin, 2004). In agreement with Lin’s (2004) conclusion, our results for the different metrics were not statistically distinguishable. However, ROUGE-1 showed the largest variation across the methods. In the following comparisons, all results are presented in terms of the ROUGE-1 Recall metric. We estimated the ROUGE metric using 10-fold cross validation. The results of training and testing comprise the average ROUGE values obtained for English, Hebrew, and bilingual corpora (Table 3). Since we experimented with a different number of English and Hebrew documents (533 and 50, respectively), we created 10 balanced bilingual corpora, each with the same number of English and Hebrew documents, by combining approximately 50 randomly selected English documents with all 50 Hebrew documents. Each corpus was then subjected to 10-fold cross validation, and the average results for training and testing were calculated. We compared our approach (1) with a multilingual version of TextRank (denoted by ML TR) (Mihalcea, 2005) as the best known multilingual summarizer, (2) with Microsoft Word’s Autosummarize function (denoted by MS SUM) as a widely used commercial summarizer (we reported the following bug to Microsoft: Microsoft Word’s Document.Autosummarize Method returns different results from the output of the AutoSummarize Dialog Box; in our experiments, the Method results were used), and (3) with the best single scoring method in each corpus. As a baseline, we compiled summaries created from the initial sentences (denoted by POS F). Table 4 shows the comparative results (ROUGE mean values) for English, Hebrew, and bilingual corpora, with the best summarizers on top.
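Both the assessor-agreement filtering described above and the GA's fitness rely on ROUGE-1 Recall, which reduces to clipped unigram recall against the reference extracts. A minimal sketch follows (plain lowercased whitespace tokenization, no stemming — the actual toolkit, including its Hebrew adaptation, is more involved).

from collections import Counter

def rouge_1_recall(candidate, references):
    # candidate: system or assessor summary (string); references: list of reference summaries.
    cand = Counter(candidate.lower().split())
    matched, total = 0, 0
    for ref in references:
        ref_counts = Counter(ref.lower().split())
        matched += sum(min(c, cand[w]) for w, c in ref_counts.items())
        total += sum(ref_counts.values())
    return matched / total if total else 0.0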
Pairwise comparisons between summarizers indicated that all methods (except POS F and ML TR in the English and bilingual corpora, and D COV J and POS F in the Hebrew corpus) were significantly different at the 95% confidence level. MUSE performed significantly better than TextRank in all three corpora and better than the best single method—COV DEG in the English corpus and D COV J in the Hebrew corpus, respectively. Two sets of features—the full set of 31 sentence scoring metrics and the 10 best bilingual metrics determined in our previous work (submitted for publication) using a clustering analysis of the methods' results on both corpora—were tested on the bilingual corpus. The experimental results show that the optimized combination of the 10 best metrics is not significantly distinguishable from the best single metric in the multilingual corpus, COV DEG. The difference between the combination of all 31 metrics and COV DEG is significant only with a one-tailed p-value of 0.0798 (considered not very significant). Both combinations significantly outperformed all the other summarizers that were compared. Table 4 contains the results of MUSE trained with weights for all 31 metrics. Our experiments showed that the removal of highly correlated metrics (the metric with the lower ROUGE value out of each pair of highly correlated metrics) from the linear combination slightly improved summarization quality, but the improvement was not statistically significant. Discarding bottom-ranked features (up to 50%) also did not affect the results significantly. Table 5 shows the best vectors generated from training MUSE on all the documents in the English, Hebrew, and multilingual (one of the 10 balanced) corpora, together with their ROUGE training scores and number of GA iterations. While the optimal values of the weights are expected to be nonnegative, among the actual results are some negative values. Although there is no simple explanation for this outcome, it may be related to a well-known phenomenon from Numerical Analysis called over-relaxation (Friedman and Kandel, 1994). For example, the Laplace equation φ_xx + φ_yy = 0 is iteratively solved over a grid of points as follows: at each grid point, let φ^(n) and φ̄^(n) denote the nth iteration as calculated from the differential equation and its modified final value, respectively. The final value is chosen as ωφ^(n) + (1 − ω)φ̄^(n−1). While the sum of the two weights is obviously 1, the optimal value of ω, which minimizes the number of iterations needed for convergence, usually satisfies 1 < ω < 2 (i.e., the second weight 1 − ω is negative) and approaches 2 the finer the grid gets. Though somewhat unexpected, this surprising result can be rigorously proved (Varga, 1962).

Table 3: Results of 10-fold cross validation
       ENG     HEB     MULT
Train  0.4483  0.5993  0.5205
Test   0.4461  0.5936  0.5027

Table 4: Summarization performance. Mean ROUGE-1
Metric    ENG     HEB     MULT
MUSE      0.4461  0.5921  0.4633
COV DEG   0.4363  0.5679  0.4588
D COV J   0.4251  0.5748  0.4512
POS F     0.4190  0.5678  0.4440
ML TR     0.4138  0.5190  0.4288
MS SUM    0.3097  0.4114  0.3184

Assuming efficient implementation, most metrics have a linear computational complexity relative to the total number of words in a document, i.e., O(n). As a result, the total computation time of MUSE, given a trained model, is also linear (up to a factor of the number of metrics in the combination). The training time is proportional to the number of GA iterations multiplied by the number of individuals in the population, times the fitness evaluation (ROUGE) time.
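(As an aside, the over-relaxation analogy above can be illustrated numerically: a successive-over-relaxation solver for the Laplace equation converges in noticeably fewer iterations for 1 < ω < 2 than for ω = 1, even though the weight 1 − ω is then negative. The grid size, boundary condition, and tolerance below are arbitrary choices made only for the illustration.)

import numpy as np

def sor_iterations(boundary, omega, tol=1e-6, max_iter=100000):
    # boundary: 2-D array whose border values are held fixed; the interior is iterated in place.
    phi = boundary.astype(float).copy()
    for it in range(1, max_iter + 1):
        delta = 0.0
        for i in range(1, phi.shape[0] - 1):
            for j in range(1, phi.shape[1] - 1):
                gs = 0.25 * (phi[i + 1, j] + phi[i - 1, j] + phi[i, j + 1] + phi[i, j - 1])
                new = (1.0 - omega) * phi[i, j] + omega * gs   # the two weights sum to 1
                delta = max(delta, abs(new - phi[i, j]))
                phi[i, j] = new
        if delta < tol:
            return it
    return max_iter

grid = np.zeros((30, 30))
grid[0, :] = 1.0                       # a simple fixed boundary condition
print(sor_iterations(grid, omega=1.0), sor_iterations(grid, omega=1.9))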
On average, in our experiments the GA performed 5 −6 iterations—selection and reproduction—before reaching convergence. 5 Conclusions and future work In this paper we introduced MUSE, a new, GAbased approach to multilingual extractive summarization. We evaluated the proposed methodology on two languages from different language families: English and Hebrew. The experimental results showed that MUSE significantly outperformed TextRank, the best known languageTable 5: Induced weights for the best linear combination of scoring metrics Metric ENG HEB MULT COV DEG 8.490 0.171 0.697 KEY DEG 15.774 0.218 -2.108 KEY 4.734 0.471 0.346 COV PR -4.349 0.241 -0.462 COV 10.016 -0.112 0.865 D COV C -9.499 -0.163 1.112 D COV J 11.337 0.710 2.814 KEY PR 0.757 0.029 -0.326 LUHN DEG 6.970 0.211 0.113 POS F 6.875 0.490 0.255 LEN CH 1.333 -0.002 0.214 LUHN -2.253 -0.060 0.411 LUHN PR 1.878 -0.273 -2.335 LEN W -13.204 -0.006 1.596 ML TR 8.493 0.340 1.549 TITLE E J -5.551 -0.060 -1.210 TITLE E O -21.833 0.074 -1.537 D COV E J 1.629 0.302 0.196 D COV O 5.531 -0.475 0.431 TFISF -0.333 -0.503 0.232 DEG 3.584 -0.218 0.059 D COV E O 8.557 -0.130 -1.071 PR 5.891 -0.639 1.793 TITLE J -7.551 0.071 1.445 TF 0.810 0.202 -0.650 TITLE O -11.996 0.179 -0.634 SVD -0.557 0.137 0.384 TITLE C 5.536 -0.029 0.933 POS B -5.350 0.347 1.074 GRASE -2.197 -0.116 -1.655 POS L -22.521 -0.408 -3.531 Score 0.4549 0.6019 0.526 Iterations 10 6 7 independent approach, in both Hebrew and English using either monolingual or bilingual corpora. Moreover, our results suggest that the same weighting model is applicable across multiple languages. In future work, one may: - Evaluate MUSE on additional languages and language families. - Incorporate threshold values for threshold-based methods (Table 2) into the GA-based optimization procedure. - Improve performance of similarity-based metrics in the multilingual domain. - Apply additional optimization techniques like Evolution Strategy (Beyer and Schwefel, 2002), which is known to perform well in a real-valued search space. - Extend the search for the best summary to the problem of multi-object optimization, combining several summary quality metrics. 934 Acknowledgments We are grateful to Michael Elhadad and Galina Volk from Ben-Gurion University for providing the ROUGE toolkit adapted to the Hebrew alphabet, and to Slava Kisilevich from the University of Konstanz for the technical support in evaluation experiments. References P. B. Baxendale. 1958. Machine-made index for technical literaturean experiment. IBM Journal of Research and Development, 2(4):354–361. H.-G. Beyer and H.-P. Schwefel. 2002. Evolution strategies: A comprehensive introduction. Journal Natural Computing, 1(1):3–52. S. Brin and L. Page. 1998. The anatomy of a largescale hypertextual web search engine. Computer networks and ISDN systems, 30(1-7):107–117. DUC. 2002. Document understanding conference. http://duc.nist.gov. H. P. Edmundson. 1969. New methods in automatic extracting. ACM, 16(2). G. Erkan and D. R. Radev. 2004. Lexrank: Graphbased lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457–479. K. Filippova, M. Surdeanu, M. Ciaramita, and H. Zaragoza. 2009. Company-oriented extractive summarization of financial news. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 246–254. M. Friedman and A. Kandel. 1994. Fundamentals of Computer Numerical Analysis. CRC Press. D. E. Goldberg. 1989. 
Genetic algorithms in search, optimization and machine learning. AddisonWesley. J. Goldstein, M. Kantrowitz, V. Mittal, and J. Carbonell. 1999. Summarizing text documents: Sentence selection and evaluation metrics. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 121–128. Y. Gong and X. Liu. 2001. Generic text summarization using relevance measure and latent semantic analysis. In Proceedings of the 24th ACM SIGIR conference on Research and development in information retrieval, pages 19–25. A. Gulli and A. Signorini. 2005. The indexable web is more than 11.5 billion pages. http://www.cs. uiowa.edu/˜asignori/web-size/. M. Hassel and J. Sjobergh. 2006. Towards holistic summarization: Selecting summaries, not sentences. In Proceedings of Language Resources and Evaluation. K. Ishikawa, S-I. ANDO, S-I. Doi, and A. Okumura. 2002. Trainable automatic text summarization using segmentation of sentence. In Proceedings of 2002 NTCIR 3 TSC workshop. F. J. Kallel, M. Jaoua, L. B. Hadrich, and A. Ben Hamadou. 2004. Summarization at laris laboratory. In Proceedings of the Document Understanding Conference. J.M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. Journal of the ACM (JACM), 46(5):604–632. J. Kupiec, J. Pedersen, and F Chen. 1995. A trainable document summarizer. In Proceedings of the 18th annual international ACM SIGIR conference, pages 68–73. C.Y. Lin and E. Hovy. 1997. Identifying topics by position. In Proceedings of the fifth conference on Applied natural language processing, pages 283–290. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram cooccurrence statistics. In NAACL ’03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 71–78. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Proceedings of the Workshop on Text Summarization Branches Out (WAS 2004), pages 25–26. M. Litvak and M. Last. 2008. Graph-based keyword extraction for single-document summarization. In Proceedings of the Workshop on Multi-source Multilingual Information Extraction and Summarization, pages 17–24. D. Liu, Y. He, D. Ji, and H. Yang. 2006a. Genetic algorithm based multi-document summarization. Lecture Notes in Computer Science, 4099:1140. D. Liu, Y. Wang, C. Liu, and Z. Wang. 2006b. Multiple documents summarization based on genetic algorithm. Lecture Notes in Computer Science, 4223:355. H. P. Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2:159–165. Inderjeet Mani. 2001. Automatic Summarization. Natural Language Processing, John Benjamins Publishing Company. Rada Mihalcea. 2005. Language independent extractive summarization. In AAAI’05: Proceedings of the 20th national conference on Artificial intelligence, pages 1688–1689. 935 J.L. Neto, A.D. Santos, C.A.A. Kaestner, and A.A. Freitas. 2000. Generating text summaries through the relative importance of topics. Lecture Notes in Computer Science, pages 300–309. Constantin Or˘asan, Richard Evans, and Ruslan Mitkov. 2000. Enhancing preference-based anaphora resolution with genetic algorithms. In Dimitris Christodoulakis, editor, Proceedings of the Second International Conference on Natural Language Processing, volume 1835, pages 185 – 195, Patras, Greece, June 2– 4. Springer. Dragomir Radev, Sasha Blair-Goldensohn, and Zhu Zhang. 2001. 
Experiments in single and multidocument summarization using mead. First Document Understanding Conference. Horacio Saggion, Kalina Bontcheva, and Hamish Cunningham. 2003. Robust generic and query-based summarisation. In EACL ’03: Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics. G. Salton, A. Singhal, M. Mitra, and C. Buckley. 1997. Automatic text structuring and summarization. Information Processing and Management, 33(2):193– 207. C. N. Satoshi, S. Satoshi, M. Murata, K. Uchimoto, M. Utiyama, and H. Isahara. 2001. Sentence extraction system assembling multiple evidence. In Proceedings of 2nd NTCIR Workshop, pages 319– 324. A. Schenker, H. Bunke, M. Last, and A. Kandel. 2004. Classification of web documents using graph matching. International Journal of Pattern Recognition and Artificial Intelligence, 18(3):475–496. A. Schenker, H. Bunke, M. Last, and A. Kandel. 2005. Graph-theoretic techniques for web content mining. J. Steinberger and K. Jezek. 2004. Text summarization and singular value decomposition. Lecture Notes in Computer Science, pages 245–254. S. Teufel and M. Moens. 1997. Sentence extraction as a classification task. In Proceedings of the Workshop on Intelligent Scalable Summarization, ACL/EACL Conference, pages 58–65. Peter D. Turney. 2000. Learning algorithms for keyphrase extraction. Information Retrieval, 2(4):303–336. L. Vanderwende, H. Suzuki, C. Brockett, and A. Nenkova. 2007. Beyond sumbasic: Taskfocused summarization with sentence simplification and lexical expansion. Information processing and management, 43(6):1606–1618. R.S. Varga. 1962. Matrix Iterative Methods. PrenticeHall. G. A. Vignaux and Z. Michalewicz. 1991. A genetic algorithm for the linear transportation problem. IEEE Transactions on Systems, Man and Cybernetics, 21:445–452. K.F. Wong, M. Wu, and W. Li. 2008. Extractive summarization using supervised and semi-supervised learning. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 985–992. 936
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 937–947, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Bayesian Synchronous Tree-Substitution Grammar Induction and its Application to Sentence Compression Elif Yamangil and Stuart M. Shieber Harvard University Cambridge, Massachusetts, USA {elif, shieber}@seas.harvard.edu Abstract We describe our experiments with training algorithms for tree-to-tree synchronous tree-substitution grammar (STSG) for monolingual translation tasks such as sentence compression and paraphrasing. These translation tasks are characterized by the relative ability to commit to parallel parse trees and availability of word alignments, yet the unavailability of large-scale data, calling for a Bayesian tree-to-tree formalism. We formalize nonparametric Bayesian STSG with epsilon alignment in full generality, and provide a Gibbs sampling algorithm for posterior inference tailored to the task of extractive sentence compression. We achieve improvements against a number of baselines, including expectation maximization and variational Bayes training, illustrating the merits of nonparametric inference over the space of grammars as opposed to sparse parametric inference with a fixed grammar. 1 Introduction Given an aligned corpus of tree pairs, we might want to learn a mapping between the paired trees. Such induction of tree mappings has application in a variety of natural-language-processing tasks including machine translation, paraphrase, and sentence compression. The induced tree mappings can be expressed by synchronous grammars. Where the tree pairs are isomorphic, synchronous context-free grammars (SCFG) may suffice, but in general, non-isomorphism can make the problem of rule extraction difficult (Galley and McKeown, 2007). More expressive formalisms such as synchronous tree-substitution (Eisner, 2003) or treeadjoining grammars may better capture the pairings. In this work, we explore techniques for inducing synchronous tree-substitution grammars (STSG) using as a testbed application extractive sentence compression. Learning an STSG from aligned trees is tantamount to determining a segmentation of the trees into elementary trees of the grammar along with an alignment of the elementary trees (see Figure 1 for an example of such a segmentation), followed by estimation of the weights for the extracted tree pairs.1 These elementary tree pairs serve as the rules of the extracted grammar. For SCFG, segmentation is trivial — each parent with its immediate children is an elementary tree — but the formalism then restricts us to deriving isomorphic tree pairs. STSG is much more expressive, especially if we allow some elementary trees on the source or target side to be unsynchronized, so that insertions and deletions can be modeled, but the segmentation and alignment problems become nontrivial. Previous approaches to this problem have treated the two steps — grammar extraction and weight estimation — with a variety of methods. 
One approach is to use word alignments (where these can be reliably estimated, as in our testbed application) to align subtrees and extract rules (Och and Ney, 2004; Galley et al., 2004) but this leaves open the question of finding the right level of generality of the rules — how deep the rules should be and how much lexicalization they should involve — necessitating resorting to heuristics such as minimality of rules, and leading to 1Throughout the paper we will use the word STSG to refer to the tree-to-tree version of the formalism, although the string-to-tree version is also commonly used. 937 large grammars. Once a given set of rules is extracted, weights can be imputed using a discriminative approach to maximize the (joint or conditional) likelihood or the classification margin in the training data (taking or not taking into account the derivational ambiguity). This option leverages a large amount of manual domain knowledge engineering and is not in general amenable to latent variable problems. A simpler alternative to this two step approach is to use a generative model of synchronous derivation and simultaneously segment and weight the elementary tree pairs to maximize the probability of the training data under that model; the simplest exemplar of this approach uses expectation maximization (EM) (Dempster et al., 1977). This approach has two frailties. First, EM search over the space of all possible rules is computationally impractical. Second, even if such a search were practical, the method is degenerate, pushing the probability mass towards larger rules in order to better approximate the empirical distribution of the data (Goldwater et al., 2006; DeNero et al., 2006). Indeed, the optimal grammar would be one in which each tree pair in the training data is its own rule. Therefore, proposals for using EM for this task start with a precomputed subset of rules, and with EM used just to assign weights within this grammar. In summary, previous methods suffer from problems of narrowness of search, having to restrict the space of possible rules, and overfitting in preferring overly specific grammars. We pursue the use of hierarchical probabilistic models incorporating sparse priors to simultaneously solve both the narrowness and overfitting problems. Such models have been used as generative solutions to several other segmentation problems, ranging from word segmentation (Goldwater et al., 2006), to parsing (Cohn et al., 2009; Post and Gildea, 2009) and machine translation (DeNero et al., 2008; Cohn and Blunsom, 2009; Liu and Gildea, 2009). Segmentation is achieved by introducing a prior bias towards grammars that are compact representations of the data, namely by enforcing simplicity and sparsity: preferring simple rules (smaller segments) unless the use of a complex rule is evidenced by the data (through repetition), and thus mitigating the overfitting problem. A Dirichlet process (DP) prior is typically used to achieve this interplay. Interestingly, samplingbased nonparametric inference further allows the possibility of searching over the infinite space of grammars (and, in machine translation, possible word alignments), thus side-stepping the narrowness problem outlined above as well. In this work, we use an extension of the aforementioned models of generative segmentation for STSG induction, and describe an algorithm for posterior inference under this model that is tailored to the task of extractive sentence compression. 
This task is characterized by the availability of word alignments, providing a clean testbed for investigating the effects of grammar extraction. We achieve substantial improvements against a number of baselines including EM, support vector machine (SVM) based discriminative training, and variational Bayes (VB). By comparing our method to a range of other methods that are subject differentially to the two problems, we can show that both play an important role in performance limitations, and that our method helps address both as well. Our results are thus not only encouraging for grammar estimation using sparse priors but also illustrate the merits of nonparametric inference over the space of grammars as opposed to sparse parametric inference with a fixed grammar. In the following, we define the task of extractive sentence compression and the Bayesian STSG model, and algorithms we used for inference and prediction. We then describe the experiments in extractive sentence compression and present our results in contrast with alternative algorithms. We conclude by giving examples of compression patterns learned by the Bayesian method. 2 Sentence compression Sentence compression is the task of summarizing a sentence while retaining most of the informational content and remaining grammatical (Jing, 2000). In extractive sentence compression, which we focus on in this paper, an order-preserving subset of the words in the sentence are selected to form the summary, that is, we summarize by deleting words (Knight and Marcu, 2002). An example sentence pair, which we use as a running example, is the following: • Like FaceLift, much of ATM’s screen performance depends on the underlying application. • ATM’s screen performance depends on the underlying application. 938 Figure 1: A portion of an STSG derivation of the example sentence and its extractive compression. where the underlined words were deleted. In supervised sentence compression, the goal is to generalize from a parallel training corpus of sentences (source) and their compressions (target) to unseen sentences in a test set to predict their compressions. An unsupervised setup also exists; methods for the unsupervised problem typically rely on language models and linguistic/discourse constraints (Clarke and Lapata, 2006a; Turner and Charniak, 2005). Because these methods rely on dynamic programming to efficiently consider hypotheses over the space of all possible compressions of a sentence, they may be harder to extend to general paraphrasing. 3 The STSG Model Synchronous tree-substitution grammar is a formalism for synchronously generating a pair of non-isomorphic source and target trees (Eisner, 2003). Every grammar rule is a pair of elementary trees aligned at the leaf level at their frontier nodes, which we will denote using the form cs/ct →es/et, γ (indices s for source, t for target) where cs, ct are root nonterminals of the elementary trees es, et respectively and γ is a 1-to-1 correspondence between the frontier nodes in es and et. For example, the rule S / S →(S (PP (IN Like) NP[ϵ]) NP[1] VP[2]) / (S NP[1] VP[2]) can be used to delete a subtree rooted at PP. We use square bracketed indices to represent the alignment γ of frontier nodes — NP[1] aligns with NP[1], VP[2] aligns with VP[2], NP[ϵ] aligns with the special symbol ϵ denoting a deletion from the source tree. Symmetrically ϵ-aligned target nodes are used to represent insertions into the target tree. 
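To make this rule notation concrete, the following is a minimal sketch (our own illustration, not the authors' implementation) of how an elementary tree pair cs/ct → es/et together with its frontier-node alignment γ might be represented in code. Trees are nested tuples, bare strings mark frontier nonterminals, and None stands in for ϵ; the example encodes the S / S subtree-deletion rule above.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# An elementary tree is a nested tuple (label, child_1, ..., child_k);
# a bare string among the children marks a frontier nonterminal.
Tree = Tuple

@dataclass
class STSGRule:
    """Hypothetical container for a rule c_s/c_t -> e_s/e_t with alignment gamma."""
    source_root: str                       # c_s
    target_root: Optional[str]             # c_t, or None for a deletion rule (epsilon side)
    source_tree: Tree                      # e_s
    target_tree: Optional[Tree]            # e_t, or None for a deletion rule
    # gamma as pairs (source frontier index, target frontier index or None = epsilon)
    alignment: List[Tuple[int, Optional[int]]]

# S / S -> (S (PP (IN Like) NP[eps]) NP[1] VP[2]) / (S NP[1] VP[2])
pp_deletion = STSGRule(
    source_root="S",
    target_root="S",
    source_tree=("S", ("PP", ("IN", "Like"), "NP"), "NP", "VP"),
    target_tree=("S", "NP", "VP"),
    alignment=[(0, None),  # the NP under PP is aligned with epsilon (deleted)
               (1, 0),     # NP[1] <-> NP[1]
               (2, 1)],    # VP[2] <-> VP[2]
)
print(pp_deletion.alignment)
```

The same container would cover the NP / ϵ deletion rule discussed next simply by leaving the target side empty.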
Similarly, the rule NP / ϵ →(NP (NN FaceLift)) / ϵ can be used to continue deriving the deleted subtree. See Figure 1 for an example of how an STSG with these rules would operate in synchronously generating our example sentence pair. STSG is a convenient choice of formalism for a number of reasons. First, it eliminates the isomorphism and strong independence assumptions of SCFGs. Second, the ability to have rules deeper than one level provides a principled way of modeling lexicalization, whose importance has been emphasized (Galley and McKeown, 2007; Yamangil and Nelken, 2008). Third, we may have our STSG operate on trees instead of sentences, which allows for efficient parsing algorithms, as well as providing syntactic analyses for our predictions, which is desirable for automatic evaluation purposes. A straightforward extension of the popular EM algorithm for probabilistic context free grammars (PCFG), the inside-outside algorithm (Lari and Young, 1990), can be used to estimate the rule weights of a given unweighted STSG based on a corpus of parallel parse trees t = t1, . . . , tN where tn = tn,s/tn,t for n = 1, . . . , N. Similarly, an 939 Figure 2: Gibbs sampling updates. We illustrate a sampler move to align/unalign a source node with a target node (top row in blue), and split/merge a deletion rule via aligning with ϵ (bottom row in red). extension of the Viterbi algorithm is available for finding the maximum probability derivation, useful for predicting the target analysis tN+1,t for a test instance tN+1,s. (Eisner, 2003) However, as noted earlier, EM is subject to the narrowness and overfitting problems. 3.1 The Bayesian generative process Both of these issues can be addressed by taking a nonparametric Bayesian approach, namely, assuming that the elementary tree pairs are sampled from an independent collection of Dirichlet process (DP) priors. We describe such a process for sampling a corpus of tree pairs t. For all pairs of root labels c = cs/ct that we consider, where up to one of cs or ct can be ϵ (e.g., S / S, NP / ϵ), we sample a sparse discrete distribution Gc over infinitely many elementary tree pairs e = es/et sharing the common root c from a DP Gc ∼ DP(αc, P0(· | c)) (1) where the DP has the concentration parameter αc controlling the sparsity of Gc, and the base distribution P0(· | c) is a distribution over novel elementary tree pairs that we describe more fully shortly. We then sample a sequence of elementary tree pairs to serve as a derivation for each observed derived tree pair. For each n = 1, . . . , N, we sample elementary tree pairs en = en,1, . . . , en,dn in a derivation sequence (where dn is the number of rules used in the derivation), consulting Gc whenever an elementary tree pair with root c is to be sampled. e iid∼ Gc, for all e whose root label is c Given the derivation sequence en, a tree pair tn is determined, that is, p(tn | en) =  1 en,1, . . . , en,dn derives tn 0 otherwise. (2) The hyperparameters αc can be incorporated into the generative model as random variables; however, we opt to fix these at various constants to investigate different levels of sparsity. For the base distribution P0(· | c) there are a variety of choices; we used the following simple scenario. (We take c = cs/ct.) Synchronous rules For the case where neither cs nor ct are the special symbol ϵ, the base distribution first generates es and et independently, and then samples an alignment between the frontier nodes. 
Given a nonterminal, an elementary tree is generated by first making a decision to expand the nonterminal (with probability βc) or to leave it as a frontier node (1 −βc). If the decision to expand was made, we sample an appropriate rule from a PCFG which we estimate ahead 940 of time from the training corpus. We expand the nonterminal using this rule, and then repeat the same procedure for every child generated that is a nonterminal until there are no generated nonterminal children left. This is done independently for both es and et. Finally, we sample an alignment between the frontier nodes uniformly at random out of all possible alingments. Deletion/insertion rules If ct = ϵ, that is, we have a deletion rule, we need to generate e = es/ϵ. (The insertion rule case is symmetric.) The base distribution generates es using the same process described for synchronous rules above. Then with probability 1 we align all frontier nodes in es with ϵ. In essence, this process generates TSG rules, rather than STSG rules, which are used to cover deleted (or inserted) subtrees. This simple base distribution does nothing to enforce an alignment between the internal nodes of es and et. One may come up with more sophisticated base distributions. However the main point of the base distribution is to encode a controllable preference towards simpler rules; we therefore make the simplest possible assumption. 3.2 Posterior inference via Gibbs sampling Assuming fixed hyperparameters α = {αc} and β = {βc}, our inference problem is to find the posterior distribution of the derivation sequences e = e1, . . . , eN given the observations t = t1, . . . , tN. Applying Bayes’ rule, we have p(e | t) ∝ p(t | e)p(e) (3) where p(t | e) is a 0/1 distribution (2) which does not depend on Gc, and p(e) can be obtained by collapsing Gc for all c. Consider repeatedly generating elementary tree pairs e1, . . . , ei, all with the same root c, iid from Gc. Integrating over Gc, the ei become dependent. The conditional prior of the i-th elementary tree pair given previously generated ones e<i = e1, . . . , ei−1 is given by p(ei | e<i) = nei + αcP0(ei | c) i −1 + αc (4) where nei denotes the number of times ei occurs in e<i. Since the collapsed model is exchangeable in the ei, this formula forms the backbone of the inference procedure that we describe next. It also makes clear DP’s inductive bias to reuse elementary tree pairs. We use Gibbs sampling (Geman and Geman, 1984), a Markov chain Monte Carlo (MCMC) method, to sample from the posterior (3). A derivation e of the corpus t is completely specified by an alignment between the source nodes and the corresponding target nodes (as well as ϵ on either side), which we take to be the state of the sampler. We start at a random derivation of the corpus, and at every iteration resample a derivation by amending the current one through local changes made at the node level, in the style of Goldwater et al. (2006). Our sampling updates are extensions of those used by Cohn and Blunsom (2009) in MT, but are tailored to our task of extractive sentence compression. In our task, no target node can align with ϵ (which would indicate a subtree insertion), and barring unary branches no source node i can align with two different target nodes j and j′ at the same time (indicating a tree expansion). Rather, the configurations of interest are those in which only source nodes i can align with ϵ, and two source nodes i and i′ can align with the same target node j. 
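Equation (4) above can be read as a simple count-based computation: once Gc is integrated out, the probability of the next elementary tree pair with root c depends only on how often that pair has already been used and on the base distribution. The sketch below is our own reading of it; the rule strings and the uniform stand-in for P0 are purely illustrative.

```python
from collections import Counter

def dp_predictive(rule, root, counts, prev_draws, alpha, base_prob):
    """Collapsed-DP conditional of equation (4):
    (n_rule + alpha * P0(rule | root)) / (prev_draws + alpha),
    where prev_draws is the number of pairs with root `root` generated so far."""
    return (counts[rule] + alpha * base_prob(rule, root)) / (prev_draws + alpha)

# Toy usage: counts of elementary tree pairs with root S/S seen so far.
counts = Counter({"S/S: delete PP": 3, "S/S: copy NP VP": 5})
uniform_p0 = lambda rule, root: 0.01       # stand-in for the real base distribution
p = dp_predictive("S/S: delete PP", "S/S", counts,
                  prev_draws=8, alpha=1.0, base_prob=uniform_p0)
print(round(p, 4))  # (3 + 1.0 * 0.01) / (8 + 1.0)
```

During sampling, such predictive probabilities are only ever evaluated for the restricted alignment configurations just described.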
Thus, the alignments of interest are not arbitrary relations, but (partial) functions from nodes in es to nodes in et or ϵ. We therefore sample in the direction from source to target. In particular, we visit every tree pair and each of its source nodes i, and update its alignment by selecting between and within two choices: (a) unaligned, (b) aligned with some target node j or ϵ. The number of possibilities j in (b) is significantly limited, firstly by the word alignment (for instance, a source node dominating a deleted subspan cannot be aligned with a target node), and secondly by the current alignment of other nearby aligned source nodes. (See Cohn and Blunsom (2009) for details of matching spans under tree constraints.)2 2One reviewer was concerned that since we explicitly disallow insertion rules in our sampling procedure, our model that generates such rules wastes probability mass and is therefore “deficient”. However, we regard sampling as a separate step from the data generation process, in which we can formulate more effective algorithms by using our domain knowledge that our data set was created by annotators who were instructed to delete words only. Also, disallowing insertion rules in the base distribution unnecessarily complicates the definition of the model, whereas it is straightforward to define the joint distribution of all (potentially useful) rules and then use domain knowledge to constrain the support of that distribution during inference, as we do here. In fact, it is pos941 More formally, let eM be the elementary tree pair rooted at the closest aligned ancestor i′ of node i when it is unaligned; and let eA and eB be the elementary tree pairs rooted at i′ and i respectively when i is aligned with some target node j or ϵ. Then, by exchangeability of the elementary trees sharing the same root label, and using (4), we have p(unalign) = neM + αcM P0(eM | cM) ncM + αcM (5) p(align with j) = neA + αcAP0(eA | cA) ncA + αcA (6) × neB + αcBP0(eB | cB) ncB + αcB (7) where the counts ne·, nc· are with respect to the current derivation of the rest of the corpus; except for neB, ncB we also make sure to account for having generated eA. See Figure 2 for an illustration of the sampling updates. It is important to note that the sampler described can move from any derivation to any other derivation with positive probability (if only, for example, by virtue of fully merging and then resegmenting), which guarantees convergence to the posterior (3). However some of these transition probabilities can be extremely small due to passing through low probability states with large elementary trees; in turn, the sampling procedure is prone to local modes. In order to counteract this and to improve mixing we used simulated annealing. The probability mass function (5-7) was raised to the power 1/T with T dropping linearly from T = 5 to T = 0. Furthermore, using a final temperature of zero, we recover a maximum a posteriori (MAP) estimate which we denote eMAP. 3.3 Prediction We discuss the problem of predicting a target tree tN+1,t that corresponds to a source tree tN+1,s unseen in the observed corpus t. The maximum probability tree (MPT) can be found by considering all possible ways to derive it. However a much simpler alternative is to choose the target tree implied by the maximum probability derivasible to prove that our approach is equivalent up to a rescaling of the concentration parameters. Since we fit these parameters to the data, our approach is equivalent. 
tion (MPD), which we define as e∗ = argmax e p(e | ts, t) = argmax e X e p(e | ts, e)p(e | t) where e denotes a derivation for t = ts/tt. (We suppress the N + 1 subscripts for brevity.) We approximate this objective first by substituting δeMAP(e) for p(e | t) and secondly using a finite STSG model for the infinite p(e | ts, eMAP), which we obtain simply by normalizing the rule counts in eMAP. We use dynamic programming for parsing under this finite model (Eisner, 2003).3 Unfortunately, this approach does not ensure that the test instances are parsable, since ts may include unseen structure or novel words. A workaround is to include all zero-count context free copy rules such as NP / NP →(NP NP[1] PP[2]) / (NP NP[1] PP[2]) NP / ϵ →(NP NP[ϵ] PP[ϵ]) / ϵ in order to smooth our finite model. We used Laplace smoothing (adding 1 to all counts) as it gave us interpretable results. 4 Evaluation We compared the Gibbs sampling compressor (GS) against a version of maximum a posteriori EM (with Dirichlet parameter greater than 1) and a discriminative STSG based on SVM training (Cohn and Lapata, 2008) (SVM). EM is a natural benchmark, while SVM is also appropriate since it can be taken as the state of the art for our task.4 We used a publicly available extractive sentence compression corpus: the Broadcast News compressions corpus (BNC) of Clarke and Lapata (2006a). This corpus consists of 1370 sentence pairs that were manually created from transcribed Broadcast News stories. We split the pairs into training, development, and testing sets of 1000, 3We experimented with MPT using Monte Carlo integration over possible derivations; the results were not significantly different from those using MPD. 4The comparison system described by Cohn and Lapata (2008) attempts to solve a more general problem than ours, abstractive sentence compression. However, given the nature of the data that we provided, it can only learn to compress by deleting words. Since the system is less specialized to the task, their model requires additional heuristics in decoding not needed for extractive compression, which might cause a reduction in performance. Nonetheless, because the comparison system is a generalization of the extractive SVM compressor of Cohn and Lapata (2007), we do not expect that the results would differ qualitatively. 942 SVM EM GS Precision 55.60 58.80 58.94 Recall 53.37 56.58 64.59 Relational F1 54.46 57.67 61.64 Compression rate 59.72 64.11 65.52 Table 1: Precision, recall, relational F1 and compression rate (%) for various systems on the 200sentence BNC test set. The compression rate for the gold standard was 65.67%. SVM EM GS Gold Grammar 2.75† 2.85∗ 3.69 4.25 Importance 2.85 2.67∗ 3.41 3.82 Comp. rate 68.18 64.07 67.97 62.34 Table 2: Average grammar and importance scores for various systems on the 20-sentence subsample. Scores marked with ∗are significantly different than the corresponding GS score at α < .05 and with † at α < .01 according to post-hoc Tukey tests. ANOVA was significant at p < .01 both for grammar and importance. 170, and 200 pairs, respectively. The corpus was parsed using the Stanford parser (Klein and Manning, 2003). In our experiments with the publicly available SVM system we used all except paraphrasal rules extracted from bilingual corpora (Cohn and Lapata, 2008). The model chosen for testing had parameter for trade-off between training error and margin set to C = 0.001, used margin rescaling, and Hamming distance over bags of tokens with brevity penalty for loss function. 
EM used a subset of the rules extracted by SVM, namely all rules except non-head deleting compression rules, and was initialized uniformly. Each EM instance was characterized by two parameters: α, the smoothing parameter for MAP-EM, and δ, the smoothing parameter for augmenting the learned grammar with rules extracted from unseen data (add(δ −1) smoothing was used), both of which were fit to the development set using grid-search over (1, 2]. The model chosen for testing was (α, δ) = (1.0001, 1.01). GS was initialized at a random derivation. We sampled the alignments of the source nodes in random order. The sampler was run for 5000 iterations with annealing. All hyperparameters αc, βc were held constant at α, β for simplicity and were fit using grid-search over α ∈[10−6, 106], β ∈ [10−3, 0.5]. The model chosen for testing was (α, β) = (100, 0.1). As an automated metric of quality, we compute F-score based on grammatical relations (relational F1, or RelF1) (Riezler et al., 2003), by which the consistency between the set of predicted grammatical relations and those from the gold standard is measured, which has been shown by Clarke and Lapata (2006b) to correlate reliably with human judgments. We also conducted a small human subjective evaluation of the grammaticality and informativeness of the compressions generated by the various methods. 4.1 Automated evaluation For all three systems we obtained predictions for the test set and used the Stanford parser to extract grammatical relations from predicted trees and the gold standard. We computed precision, recall, RelF1 (all based on grammatical relations), and compression rate (percentage of the words that are retained), which we report in Table 1. The results for GS are averages over five independent runs. EM gives a strong baseline since it already uses rules that are limited in depth and number of frontier nodes by stipulation, helping with the overfitting we have mentioned, surprisingly outperforming its discriminative counterpart in both precision and recall (and consequently RelF1). GS however maintains the same level of precision as EM while improving recall, bringing an overall improvement in RelF1. 4.2 Human evaluation We randomly subsampled our 200-sentence test set for 20 sentences to be evaluated by human judges through Amazon Mechanical Turk. We asked 15 self-reported native English speakers for their judgments of GS, EM, and SVM output sentences and the gold standard in terms of grammaticality (how fluent the compression is) and importance (how much of the meaning of and important information from the original sentence is retained) on a scale of 1 (worst) to 5 (best). We report in Table 2 the average scores. EM and SVM perform at very similar levels, which we attribute to using the same set of rules, while GS performs at a level substantially better than both, and much closer to human performance in both criteria. The 943 Figure 3: RelF1, precision, recall plotted against compression rate for GS, EM, VB. human evaluation indicates that the superiority of the Bayesian nonparametric method is underappreciated by the automated evaluation metric. 4.3 Discussion The fact that GS performs better than EM can be attributed to two reasons: (1) GS uses a sparse prior and selects a compact representation of the data (grammar sizes ranged from 4K-7K for GS compared to a grammar of about 35K rules for EM). (2) GS does not commit to a precomputed grammar and searches over the space of all grammars to find one that bests represents the corpus. 
It is possible to introduce DP-like sparsity in EM using variational Bayes (VB) training. We experiment with this next in order to understand how dominant the two factors are. The VB algorithm requires a simple update to the M-step formulas for EM where the expected rule counts are normalized, such that instead of updating the rule weight in the t-th iteration as in the following θt+1 c,e = nc,e + α −1 nc,. + Kα −K where nc,e represents the expected count of rule c →e, and K is the total number of ways to rewrite c, we now take into account our DP(αc, P0(· | c)) prior in (1), which, when truncated to a finite grammar, reduces to a K-dimensional Dirichlet prior with parameter αcP0(· | c). Thus in VB we perform a variational E-step with the subprobabilities given by θt+1 c,e = exp (Ψ(nc,e + αcP0(e | c))) exp (Ψ(nc,. + αc)) where Ψ denotes the digamma function. (Liu and Gildea, 2009) (See MacKay (1997) for details.) Hyperparameters were handled the same way as for GS. Instead of selecting a single model on the development set, here we provide the whole spectrum of models and their performances in order to better understand their comparative behavior. In Figure 3 we plot RelF1 on the test set versus compression rate and compare GS, EM, and VB (β = 0.1 fixed, (α, δ) ranging in [10−6, 106]×(1, 2]). Overall, we see that GS maintains roughly the same level of precision as EM (despite its larger compression rates) while achieving an improvement in recall, consequently performing at a higher RelF1 level. We note that VB somewhat bridges the gap between GS and EM, without quite reaching GS performance. We conclude that the mitigation of the two factors (narrowness and overfitting) both contribute to the performance gain of GS.5 4.4 Example rules learned In order to provide some insight into the grammar extracted by GS, we list in Tables (3) and (4) high 5We have also experimented with VB with parametric independent symmetric Dirichlet priors. The results were similar to EM with the exception of sparse priors resulting in smaller grammars and slightly improving performance. 944 (ROOT (S CC[ϵ] NP[1] VP[2] .[3])) / (ROOT (S NP[1] VP[2] .[3])) (ROOT (S NP[1] ADVP[ϵ] VP[2] (. .))) / (ROOT (S NP[1] VP[2] (. .))) (ROOT (S ADVP[ϵ] (, ,) NP[1] VP[2] (. .))) / (ROOT (S NP[1] VP[2] (. .))) (ROOT (S PP[ϵ] (, ,) NP[1] VP[2] (. .))) / (ROOT (S NP[1] VP[2] (. .))) (ROOT (S PP[ϵ] ,[ϵ] NP[1] VP[2] .[3])) / (ROOT (S NP[1] VP[2] .[3])) (ROOT (S NP[ϵ] (VP VBP[ϵ] (SBAR (S NP[1] VP[2]))) .[3])) / (ROOT (S NP[1] VP[2] .[3])) (ROOT (S ADVP[ϵ] NP[1] (VP MD[2] VP[3]) .[4])) / (ROOT (S NP[1] (VP MD[2] VP[3]) .[4])) (ROOT (S (SBAR (IN as) S[ϵ]) ,[ϵ] NP[1] VP[2] .[3])) / (ROOT (S NP[1] VP[2] .[3])) (ROOT (S S[ϵ] (, ,) CC[ϵ] (S NP[1] VP[2]) .[3])) / (ROOT (S NP[1] VP[2] .[3])) (ROOT (S PP[ϵ] NP[1] VP[2] .[3])) / (ROOT (S NP[1] VP[2] .[3])) (ROOT (S S[1] (, ,) CC[ϵ] S[2] (. .))) / (ROOT (S NP[1] VP[2] (. .))) (ROOT (S S[ϵ] ,[ϵ] NP[1] ADVP[2] VP[3] .[4])) / (ROOT (S NP[1] ADVP[2] VP[3] .[4])) (ROOT (S (NP (NP NNP[ϵ] (POS ’s)) NNP[1] NNP[2]) / (ROOT (S (NP NNP[1] NNP[2]) (VP (VBZ reports)) .[3])) (VP (VBZ reports)) .[3])) Table 3: High probability ROOT / ROOT compression rules from the final state of the sampler. (S NP[1] ADVP[ϵ] VP[2]) / (S NP[1] VP[2]) (S INTJ[ϵ] (, ,) NP[1] VP[2] (. .)) / (S NP[1] VP[2] (. 
.)) (S (INTJ (UH Well)) ,[ϵ] NP[1] VP[2] .[3]) / (S NP[1] VP[2] .[3]) (S PP[ϵ] (, ,) NP[1] VP[2]) / (S NP[1] VP[2]) (S ADVP[ϵ] (, ,) S[1] (, ,) (CC but) S[2] .[3]) / (S S[1] (, ,) (CC but) S[2] .[3]) (S ADVP[ϵ] NP[1] VP[2]) / (S NP[1] VP[2]) (S NP[ϵ] (VP VBP[ϵ] (SBAR (IN that) (S NP[1] VP[2]))) (. .)) / (S NP[1] VP[2] (. .)) (S NP[ϵ] (VP VBZ[ϵ] ADJP[ϵ] SBAR[1])) / S[1] (S CC[ϵ] PP[ϵ] (, ,) NP[1] VP[2] (. .)) / (S NP[1] VP[2] (. .)) (S NP[ϵ] (, ,) NP[1] VP[2] .[3]) / (S NP[1] VP[2] .[3]) (S NP[1] (, ,) ADVP[ϵ] (, ,) VP[2]) / (S NP[1] VP[2]) (S CC[ϵ] (NP PRP[1]) VP[2]) / (S (NP PRP[1]) VP[2]) (S ADVP[ϵ] ,[ϵ] PP[ϵ] ,[ϵ] NP[1] VP[2] .[3]) / (S NP[1] VP[2] .[3]) (S ADVP[ϵ] (, ,) NP[1] VP[2]) / (S NP[1] VP[2]) Table 4: High probability S / S compression rules from the final state of the sampler. probability subtree-deletion rules expanding categories ROOT / ROOT and S / S, respectively. Of especial interest are deep lexicalized rules such as a pattern of compression used many times in the BNC in sentence pairs such as “NPR’s Anne Garrels reports” / “Anne Garrels reports”. Such an informative rule with nontrivial collocation (between the possessive marker and the word “reports”) would be hard to extract heuristically and can only be extracted by reasoning across the training examples. 5 Conclusion We explored nonparametric Bayesian learning of non-isomorphic tree mappings using Dirichlet process priors. We used the task of extractive sentence compression as a testbed to investigate the effects of sparse priors and nonparametric inference over the space of grammars. We showed that, despite its degeneracy, expectation maximization is a strong baseline when given a reasonable grammar. However, Gibbs-sampling– based nonparametric inference achieves improvements against this baseline. Our investigation with variational Bayes showed that the improvement is due both to finding sparse grammars (mitigating overfitting) and to searching over the space of all grammars (mitigating narrowness). Overall, we take these results as being encouraging for STSG induction via Bayesian nonparametrics for monolingual translation tasks. The future for this work would involve natural extensions such as mixing over the space of word alignments; this would allow application to MT-like tasks where flexible word reordering is allowed, such as abstractive sentence compression and paraphrasing. References James Clarke and Mirella Lapata. 2006a. Constraintbased sentence compression: An integer programming approach. In Proceedings of the 21st Interna945 tional Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 144–151, Sydney, Australia, July. Association for Computational Linguistics. James Clarke and Mirella Lapata. 2006b. Models for sentence compression: A comparison across domains, training requirements and evaluation measures. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 377–384, Sydney, Australia, July. Association for Computational Linguistics. Trevor Cohn and Phil Blunsom. 2009. A Bayesian model of syntax-directed tree to string grammar induction. In EMNLP ’09: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 352–361, Morristown, NJ, USA. Association for Computational Linguistics. Trevor Cohn and Mirella Lapata. 2007. 
Large margin synchronous generation and its application to sentence compression. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and on Computational Natural Language Learning, pages 73–82, Prague. Association for Computational Linguistics. Trevor Cohn and Mirella Lapata. 2008. Sentence compression beyond word deletion. In COLING ’08: Proceedings of the 22nd International Conference on Computational Linguistics, pages 137–144, Manchester, United Kingdom. Association for Computational Linguistics. Trevor Cohn, Sharon Goldwater, and Phil Blunsom. 2009. Inducing compact but accurate treesubstitution grammars. In NAACL ’09: Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 548–556, Morristown, NJ, USA. Association for Computational Linguistics. A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39 (Series B):1–38. John DeNero, Dan Gillick, James Zhang, and Dan Klein. 2006. Why generative phrase models underperform surface heuristics. In StatMT ’06: Proceedings of the Workshop on Statistical Machine Translation, pages 31–38, Morristown, NJ, USA. Association for Computational Linguistics. John DeNero, Alexandre Bouchard-Cˆot´e, and Dan Klein. 2008. Sampling alignment structure under a Bayesian translation model. In EMNLP ’08: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 314–323, Morristown, NJ, USA. Association for Computational Linguistics. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In ACL ’03: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pages 205–208, Morristown, NJ, USA. Association for Computational Linguistics. Michel Galley and Kathleen McKeown. 2007. Lexicalized Markov grammars for sentence compression. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 180–187, Rochester, New York, April. Association for Computational Linguistics. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 273–280, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. S. Geman and D. Geman. 1984. Stochastic Relaxation, Gibbs Distributions and the Bayesian Restoration of Images. pages 6:721–741. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2006. Contextual dependencies in unsupervised word segmentation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 673–680, Sydney, Australia, July. Association for Computational Linguistics. Hongyan Jing. 2000. Sentence reduction for automatic text summarization. In Proceedings of the sixth conference on Applied natural language processing, pages 310–315, Morristown, NJ, USA. Association for Computational Linguistics. Dan Klein and Christopher D. Manning. 2003. Fast exact inference with a factored model for natural language parsing. In Advances in Neural Information Processing Systems 15 (NIPS, pages 3–10. MIT Press. Kevin Knight and Daniel Marcu. 2002. 
Summarization beyond sentence extraction: a probabilistic approach to sentence compression. Artif. Intell., 139(1):91–107. K. Lari and S. J. Young. 1990. The estimation of stochastic context-free grammars using the InsideOutside algorithm. Computer Speech and Language, 4:35–56. Ding Liu and Daniel Gildea. 2009. Bayesian learning of phrasal tree-to-string templates. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1308–1317, Singapore, August. Association for Computational Linguistics. 946 David J.C. MacKay. 1997. Ensemble learning for hidden markov models. Technical report, Cavendish Laboratory, Cambridge, UK. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Comput. Linguist., 30(4):417–449. Matt Post and Daniel Gildea. 2009. Bayesian learning of a tree substitution grammar. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 45–48, Suntec, Singapore, August. Association for Computational Linguistics. Stefan Riezler, Tracy H. King, Richard Crouch, and Annie Zaenen. 2003. Statistical sentence condensation using ambiguity packing and stochastic disambiguation methods for lexical-functional grammar. In NAACL ’03: Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 118–125, Morristown, NJ, USA. Association for Computational Linguistics. Jenine Turner and Eugene Charniak. 2005. Supervised and unsupervised learning for sentence compression. In ACL ’05: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 290–297, Morristown, NJ, USA. Association for Computational Linguistics. Elif Yamangil and Rani Nelken. 2008. Mining wikipedia revision histories for improving sentence compression. In Proceedings of ACL-08: HLT, Short Papers, pages 137–140, Columbus, Ohio, June. Association for Computational Linguistics. 947
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 948–957, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Contextualizing Semantic Representations Using Syntactically Enriched Vector Models Stefan Thater and Hagen Fürstenau and Manfred Pinkal Department of Computational Linguistics Saarland University {stth, hagenf, pinkal}@coli.uni-saarland.de Abstract We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi compositional fashion. It employs a systematic combination of first- and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) achieves promising results on a wordsense similarity task; to our knowledge, it is the first time that an unsupervised method has been applied to this task. 1 Introduction In the logical paradigm of natural-language semantics originating from Montague (1973), semantic structure, composition and entailment have been modelled to an impressive degree of detail and formal consistency. These approaches, however, lack coverage and robustness, and their impact on realistic natural-language applications is limited: The logical framework suffers from overspecificity, and is inappropriate to model the pervasive vagueness, ambivalence, and uncertainty of natural-language semantics. Also, the handcrafting of resources covering the huge amounts of content which are required for deep semantic processing is highly inefficient and expensive. Co-occurrence-based semantic vector models offer an attractive alternative. In the standard approach, word meaning is represented by feature vectors, with large sets of context words as dimensions, and their co-occurrence frequencies as values. Semantic similarity information can be acquired using unsupervised methods at virtually no cost, and the information gained is soft and gradual. Many NLP tasks have been modelled successfully using vector-based models. Examples include information retrieval (Manning et al., 2008), wordsense discrimination (Schütze, 1998) and disambiguation (McCarthy and Carroll, 2003), to name but a few. Standard vector-space models have serious limitations, however: While semantic information is typically encoded in phrases and sentences, distributional semantics, in sharp contrast to logic-based semantics, does not offer any natural concept of compositionality that would allow the semantics of a complex expression to be computed from the meaning of its parts. A different, but related problem is caused by word-sense ambiguity and contextual variation of usage. Frequency counts of context words for a given target word provide invariant representations averaging over all different usages of the target word. There is no obvious way to distinguish the different senses of e.g. acquire in different contexts, such as acquire knowledge or acquire shares. Several approaches for word-sense disambiguation in the framework of distributional semantics have been proposed in the literature (Schütze, 1998; McCarthy and Carroll, 2003). In contrast to these approaches, we present a method to model the mutual contextualization of words in a phrase in a compositional way, guided by syntactic structure. To some extent, our method resembles the approaches proposed by Mitchell and Lapata (2008) and Erk and Padó (2008). 
We go one step further, however, in that we employ syntactically enriched vector models as the basic meaning representations, assuming a vector space spanned by combinations of dependency relations and words (Lin, 1998). This allows us to model the semantic interaction between the meaning of a head word and its dependent at the micro-level of relation-specific cooccurrence frequencies. It turns out that the benefit to precision is considerable. Using syntactically enriched vector models raises problems of different kinds: First, the use 948 of syntax increases dimensionality and thus may cause data sparseness (Padó and Lapata, 2007). Second, the vectors of two syntactically related words, e.g., a target verb acquire and its direct object knowledge, typically have different syntactic environments, which implies that their vector representations encode complementary information and there is no direct way of combining the information encoded in the respective vectors. To solve these problems, we build upon previous work (Thater et al., 2009) and propose to use syntactic second-order vector representations. Second-order vector representations in a bag-ofwords setting were first used by Schütze (1998); in a syntactic setting, they also feature in Dligach and Palmer (2008). For the problem at hand, the use of second-order vectors alleviates the sparseness problem, and enables the definition of vector space transformations that make the distributional information attached to words in different syntactic positions compatible. Thus, it allows vectors for a predicate and its arguments to be combined in a compositional way. We conduct two experiments to assess the suitability of our method. Our first experiment is carried out on the SemEval 2007 lexical substitution task dataset (McCarthy and Navigli, 2007). It will show that our method significantly outperforms other unsupervised methods that have been proposed in the literature to rank words with respect to their semantic similarity in a given linguistic context. In a second experiment, we apply our model to the “word sense similarity task” recently proposed by Erk and McCarthy (2009), which is a refined variant of a word-sense disambiguation task. The results show a substantial positive effect. Plan of the paper. We will first review related work in Section 2, before presenting our model in Section 3. In Sections 4 and 5 we evaluate our model on the two different tasks. Section 6 concludes. 2 Related Work Several approaches to contextualize vector representations of word meaning have been proposed. One common approach is to represent the meaning of a word a in context b simply as the sum, or centroid of a and b (Landauer and Dumais, 1997). Kintsch (2001) considers a variant of this simple model. By using vector representations of a predicate p and an argument a, Kintsch identifies words that are similar to p and a, and takes the centroid of these words’ vectors to be the representation of the complex expression p(a). Mitchell and Lapata (2008), henceforth M&L, propose a general framework in which meaning representations for complex expressions are computed compositionally by combining the vector representations of the individual words of the complex expression. They focus on the assessment of different operations combining the vectors of the subexpressions. An important finding is that component-wise multiplication outperforms the more common addition method. 
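As a toy illustration of these two composition operations (our own sketch over invented bag-of-words counts, not M&L's actual implementation), both can be written as elementwise operations over sparse vectors:

```python
# Invented co-occurrence counts over three context-word dimensions.
dog   = {"food": 4.0, "animal": 9.0, "run": 6.0}
bites = {"food": 7.0, "animal": 3.0, "run": 1.0}

def add(u, v):
    """Additive composition."""
    return {d: u.get(d, 0.0) + v.get(d, 0.0) for d in set(u) | set(v)}

def multiply(u, v):
    """Component-wise (pointwise) multiplicative composition."""
    return {d: u.get(d, 0.0) * v.get(d, 0.0) for d in set(u) | set(v)}

print(sorted(add(dog, bites).items()))       # [('animal', 12.0), ('food', 11.0), ('run', 7.0)]
print(sorted(multiply(dog, bites).items()))  # [('animal', 27.0), ('food', 28.0), ('run', 6.0)]
```

Note that both operations are symmetric in their arguments, which is what the word-order objection below turns on.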
Although their composition method is guided by syntactic structure, the actual instantiations of M&L’s framework are insensitive to syntactic relations and word-order, assigning identical representation to dog bites man and man bites dog (see Erk and Padó (2008) for a discussion). Also, they use syntax-free bag-of-words-based vectors as basic representations of word meaning. Erk and Padó (2008), henceforth E&P, represent the meaning of a word w through a collection of vectors instead of a single vector: They assume selectional preferences and inverse selectional preferences to be constitutive parts of the meaning in addition to the meaning proper. The interpretation of a word p in context a is a combination of p’s meaning with the (inverse) selectional preference of a. Thus, a verb meaning does not combine directly with the meaning of its object noun, as on the M&L account, but with the centroid of the vectors of the verbs to which the noun can stand in an object relation. Clearly, their approach is sensitive to syntactic structure. Their evaluation shows that their model outperforms the one proposed by M&L on a lexical substitution task (see Section 4). The basic vectors, however, are constructed in a word space similar to the one of the M&L approach. In Thater et al. (2009), henceforth TDP, we took up the basic idea from E&P of exploiting selectional preference information for contextualization. Instead of using collections of different vectors, we incorporated syntactic information by assuming a richer internal structure of the vector representations. In a small case study, moderate improvements over E&P on a lexical substitution task could be shown. In the present paper, we formulate a general model of syntactically informed contextualization and show how to apply it to a number a of representative lexical substitution tasks. Evaluation shows significant improvements over TDP 949 acquireVB purchaseVB gainVB shareNN knowlegeNN obj, 5 obj, 3 obj, 6 obj, 7 skillNN buy-backNN conj, 2 nn, 1 Figure 1: Co-occurrence graph of a small sample corpus of dependency trees. and E&P. 3 The model In this section, we present our method of contextualizing semantic vector representations. We first give an overview of the main ideas, which is followed by a technical description of first-order and second-order vectors (Section 3.2) and the contextualization operation (Section 3.3). 3.1 Overview Our model employs vector representations for words and expressions containing syntax-specific first and second order co-occurrences information. The basis for the construction of both kinds of vector representations are co-occurrence graphs. Figure 1 shows the co-occurrence graph of a small sample corpus of dependency trees: Words are represented as nodes in the graph, possible dependency relations between them are drawn as labeled edges, with weights corresponding to the observed frequencies. From this graph, we can directly read off the first-order vector for every word w: the vector’s dimensions correspond to pairs (r,w′) of a grammatical relation and a neighboring word, and are assigned the frequency count of (w,r,w′). The noun knowledge, for instance, would be represented by the following vector: ⟨5(OBJ−1,gain),2(CONJ−1,skill),3(OBJ−1,acquire),...⟩ This vector talks about the possible dependency heads of knowledge and thus can be seen as the (inverse) selectional preference of knowledge (see Erk and Padó (2008)). 
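A short sketch of how such first-order vectors could be read off a list of dependency triples (our own code; the triples below are toy counts loosely following the co-occurrence graph of Figure 1, with inverse relations marked by a -1 suffix):

```python
from collections import defaultdict

# Toy (head, relation, dependent, count) triples from the running example.
TRIPLES = [
    ("gain",     "obj",  "knowledge", 5),
    ("acquire",  "obj",  "knowledge", 3),
    ("acquire",  "obj",  "share",     6),
    ("purchase", "obj",  "share",     7),
    ("skill",    "conj", "knowledge", 2),
]

def first_order(word, triples=TRIPLES):
    """First-order vector of `word` with dimensions (relation, neighbour word).
    As a head, `word` collects (r, dependent); as a dependent, (r-1, head)."""
    vec = defaultdict(float)
    for head, rel, dep, count in triples:
        if head == word:
            vec[(rel, dep)] += count
        if dep == word:
            vec[(rel + "-1", head)] += count
    return dict(vec)

print(first_order("knowledge"))
# {('obj-1', 'gain'): 5.0, ('obj-1', 'acquire'): 3.0, ('conj-1', 'skill'): 2.0}
```

With these raw counts the result matches the toy vector for knowledge shown above.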
As soon as we want to compute a meaning representation for a phrase like acquire knowledge from the verb acquire together with its direct object knowledge, we are facing the problem that verbs have different syntactic neighbors than nouns, hence their first-order vectors are not easily comparable. To solve this problem we additionally introduce another kind of vectors capturing informations about all words that can be reached with two steps in the co-occurrence graph. Such a path is characterized by two dependency relations and two words, i.e., a quadruple (r,w′,r′,w′′), whose weight is the product of the weights of the two edges used in the path. To avoid overly sparse vectors we generalize over the “middle word” w′ and build our second-order vectors on the dimensions corresponding to triples (r,r′,w′′) of two dependency relations and one word at the end of the twostep path. For instance, the second-order vector for acquire is ⟨15(OBJ,OBJ−1,gain), 6(OBJ,CONJ−1,skill), 6(OBJ,OBJ−1,buy-back), 42(OBJ,OBJ−1,purchase),...⟩ In this simple example, the values are the products of the edge weights on each of the paths. The method of computation is detailed in Section 3.2. Note that second order vectors in particular contain paths of the form (r,r−1,w′), relating a verb w to other verbs w′ which are possible substitution candidates. With first- and second-order vectors we can now model the interaction of semantic information within complex expressions. Given a pair of words in a particular grammatical relation like acquire knowledge, we contextualize the secondorder vector of acquire with the first-order vector of knowledge. We let the first-order vector with its selectional preference information act as a kind of weighting filter on the second-order vector, and thus refine the meaning representation of the verb. The actual operation we will use is pointwise multiplication, which turned out to be the best-performing one for our purpose. Interestingly, Mitchell and Lapata (2008) came to the same result in a different setting. In our example, we obtain a new second-order vector for acquire in the context of knowledge: ⟨75(OBJ,OBJ−1,gain), 12(OBJ,CONJ−1,skill), 0(OBJ,OBJ−1,buy-back), 0(OBJ,OBJ−1,purchase),...⟩ Note that all dimensions that are not “licensed” by the argument knowledge are filtered out as they are multiplied with 0. Also, contextualisation of acquire with the argument share instead of knowledge 950 would have led to a very different vector, which reflects the fact that the two argument nouns induce different readings of the inherently ambiguous acquire. 3.2 First and second-order vectors Assuming a set W of words and a set R of dependency relation labels, we consider a Euclidean vector space V1 spanned by the set of orthonormal basis vectors {⃗er,w′ | r ∈R,w′ ∈W}, i.e., a vector space whose dimensions correspond to pairs of a relation and a word. Recall that any vector of V1 can be represented as a finite sum of the form ∑ai⃗er,w′ with appropriate scalar factors ai. In this vector space we define the first-order vector [w] of a word w as follows: [w] = ∑ r∈R w′∈W ω(w,r,w′)·⃗er,w′ where ω is a function that assigns the dependency triple (w,r,w′) a corresponding weight. In the simplest case, ω would denote the frequency in a corpus of dependency trees of w occurring together with w′ in relation r. 
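Continuing the running example, the sketch below (again our own, using the raw toy counts; the paper's experiments replace them with pmi weights, introduced next) builds the second-order vector of acquire and contextualizes it with the first-order vector of knowledge by pointwise multiplication, reproducing the 15/42 and 75/0 values quoted above. The small first_order helper is repeated so the snippet stands on its own.

```python
from collections import defaultdict

TRIPLES = [  # toy (head, relation, dependent, count) triples as before
    ("gain", "obj", "knowledge", 5), ("acquire", "obj", "knowledge", 3),
    ("acquire", "obj", "share", 6), ("purchase", "obj", "share", 7),
    ("skill", "conj", "knowledge", 2),
]

def first_order(word):
    vec = defaultdict(float)
    for head, rel, dep, count in TRIPLES:
        if head == word:
            vec[(rel, dep)] += count
        if dep == word:
            vec[(rel + "-1", head)] += count
    return dict(vec)

def second_order(word):
    """Dimensions (r, r', w''): two-step paths, generalized over the middle word."""
    vec = defaultdict(float)
    for (r, middle), w1 in first_order(word).items():
        for (r2, end), w2 in first_order(middle).items():
            vec[(r, r2, end)] += w1 * w2
    return dict(vec)

def contextualize(word, rel, argument):
    """[[word_{rel:argument}]]: pointwise product with the rel-lifted [argument]."""
    lifted = {(rel, r, w): v for (r, w), v in first_order(argument).items()}
    return {dim: val * lifted.get(dim, 0.0) for dim, val in second_order(word).items()}

acq = second_order("acquire")
print(acq[("obj", "obj-1", "gain")], acq[("obj", "obj-1", "purchase")])  # 15.0 42.0
ctx = contextualize("acquire", "obj", "knowledge")
print(ctx[("obj", "obj-1", "gain")], ctx[("obj", "obj-1", "purchase")])  # 75.0 0.0
```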
In the experiments reported below, we use pointwise mutual information (Church and Hanks, 1990) instead as it proved superior to raw frequency counts: pmi(w,r,w′) = log p(w,w′ | r) p(w | r)p(w′ | r) We further consider a similarly defined vector space V2, spanned by an orthonormal basis {⃗er,r′,w′ | r,r′ ∈R,w′ ∈W}. Its dimensions therefore correspond to triples of two relations and a word. Evidently this is a higher dimensional space than V1, which therefore can be embedded into V2 by the “lifting maps” Lr : V1 ,→V2 defined by Lr(⃗er′,w′) :=⃗er,r′,w′ (and by linear extension therefore on all vectors of V1). Using these lifting maps we define the second-order vector [[w]] of a word w as [[w]] = ∑ r∈R w′∈W ω(w,r,w′)·Lr [w′]  Substituting the definitions of Lr and [w′], this yields [[w]] = ∑ r,r′∈R w′′∈W ∑ w′∈W ω(w,r,w′)ω(w′,r′,w′′) ! ⃗er,r′,w′′ which shows the generalization over w′ in form of the inner sum. For example, if w is a verb, r = OBJ and r′ = OBJ−1 (i.e., the inverse object relation), then the coefficients of ⃗er,r′,w′′ in [[w]] would characterize the distribution of verbs w′′ which share objects with w. 3.3 Composition Both first and second-order vectors are defined for lexical expressions only. In order to represent the meaning of complex expressions we need to combine the vectors for grammatically related words in a given sentence. Given two words w and w′ in relation r we contextualize the second-order vector of w with the r-lifted first-order vector of w′: [[wr:w′]] = [[w]]×Lr([w′]) Here × may denote any operator on V2. The objective is to incorporate (inverse) selectional preference information from the context (r,w′) in such a way as to identify the correct word sense of w. This suggests that the dimensions of [[w]] should be filtered so that only those compatible with the context remain. A more flexible approach than simple filtering, however, is to re-weight those dimensions with context information. This can be expressed by pointwise vector multiplication (in terms of the given basis of V2). We therefore take × to be pointwise multiplication. To contextualize (the vector of) a word w with multiple words w1,...,wn and corresponding relations r1,...,rn, we compute the sum of the results of the pairwise contextualizations of the target vector with the vectors of the respective dependents: [[wr1:w1,...,rn:wn]] = n ∑ k=1 [[wrk:wk]] 4 Experiments: Ranking Paraphrases In this section, we evaluate our model on a paraphrase ranking task. We consider sentences with an occurrence of some target word w and a list of paraphrase candidates w1,...,wk such that each of the wi is a paraphrase of w for some sense of w. The task is to decide for each of the paraphrase candidates wi how appropriate it is as a paraphrase of w in the given context. For instance, buy, purchase and obtain are all paraphrases of acquire, in the sense that they can be substituted for acquire in some contexts, but purchase and buy are not paraphrases of acquire in the first sentence of Table 1. 951 Sentence Paraphrases Teacher education students will acquire the knowledge and skills required to [. . . ] gain 4; amass 1; receive 1; obtain 1 Ontario Inc. will [. . . ] acquire the remaining IXOS shares [. . . ] buy 3; purchase 1; gain 1; get 1; procure 2; obtain 1 Table 1: Two examples from the lexical substitution task data set 4.1 Resources We use a vector model based on dependency trees obtained from parsing the English Gigaword corpus (LDC2003T05). 
The corpus consists of news from several newswire services, and contains over four million documents. We parse the corpus using the Stanford parser1 (de Marneffe et al., 2006) and a non-lexicalized parser model, and extract over 1.4 billion dependency triples for about 3.9 million words (lemmas) from the parsed corpus.

1 We use version 1.6 of the parser. We modify the dependency trees by “folding” prepositions into the edge labels to make the relation between a head word and the head noun of a prepositional phrase explicit.

To evaluate the performance of our model, we use various subsets of the SemEval 2007 lexical substitution task (McCarthy and Navigli, 2007) dataset. The complete dataset contains 10 instances for each of 200 target words—nouns, verbs, adjectives and adverbs—in different sentential contexts. Systems that participated in the task had to generate paraphrases for every instance, and were evaluated against a gold standard containing up to 10 possible paraphrases for each of the individual instances. There are two natural subtasks in generating paraphrases: identifying paraphrase candidates and ranking them according to the context. We follow E&P and evaluate our model only on the second subtask: we extract paraphrase candidates from the gold standard by pooling all annotated gold-standard paraphrases for all instances of a verb in all contexts, and use our model to rank these paraphrase candidates in specific contexts. Table 1 shows two instances of the target verb acquire together with its paraphrases in the gold standard as an example. The paraphrases are attached with weights, which correspond to the number of times they have been given by different annotators.

4.2 Evaluation metrics

To evaluate the performance of our method we use generalized average precision (Kishida, 2005), a variant of average precision. Average precision (Buckley and Voorhees, 2000) is a measure commonly used to evaluate systems that return ranked lists of results. Generalized average precision (GAP) additionally rewards the correct order of positive cases w.r.t. their gold standard weight. We define average precision first:

AP = \frac{1}{R} \sum_{i=1}^{n} x_i\, p_i, \qquad p_i = \frac{1}{i} \sum_{k=1}^{i} x_k

where x_i is a binary variable indicating whether the ith item as ranked by the model is in the gold standard or not, R is the size of the gold standard, and n is the number of paraphrase candidates to be ranked. If we take x_i to be the gold-standard weight of the ith item, or zero if it is not in the gold standard, we can define generalized average precision as follows:

GAP = \frac{\sum_{i=1}^{n} I(x_i)\, p_i}{\sum_{i=1}^{R} I(y_i)\, \bar{y}_i}

where I(x_i) = 1 if x_i is larger than zero and zero otherwise, y_1, …, y_R is the ideal ranking of the gold-standard paraphrases, and \bar{y}_i is the average weight of the ideal ranked list y_1, …, y_i.

As a second scoring method, we use precision out of ten (P10). The measure is less discriminative than GAP. We use it because we want to compare our model with E&P. P10 measures the percentage of gold-standard paraphrases in the top-ten list of paraphrases as ranked by the system, and can be defined as follows (McCarthy and Navigli, 2007):

P10 = \frac{\sum_{s \in M \cap G} f(s)}{\sum_{s \in G} f(s)}

where M is the list of 10 paraphrase candidates top-ranked by the model, G is the corresponding annotated gold-standard data, and f(s) is the weight of the individual paraphrases.

4.3 Experiment 1: Verb paraphrases

In our first experiment, we consider verb paraphrases, using the same controlled subset of the lexical substitution task data that had been used by TDP in an earlier study.
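For concreteness, the two measures defined in Section 4.2 can be computed as in the following sketch (ours; the gold weights in the example are illustrative, not taken from the dataset):

```python
def average_precision(ranked, gold):
    """AP for one instance: `ranked` is the model's candidate ranking,
    `gold` maps gold-standard paraphrases to their annotator weights.
    Here x_i is binary membership in the gold standard."""
    R = len(gold)
    ap, hits = 0.0, 0.0
    for i, cand in enumerate(ranked, start=1):
        x_i = 1.0 if cand in gold else 0.0
        hits += x_i
        ap += x_i * (hits / i)          # p_i = (1/i) * sum_{k<=i} x_k
    return ap / R

def generalized_average_precision(ranked, gold):
    """GAP: x_i is the gold weight of the i-th ranked candidate (0 if absent);
    the denominator scores the ideal, weight-sorted ranking y."""
    num, cum = 0.0, 0.0
    for i, cand in enumerate(ranked, start=1):
        x_i = gold.get(cand, 0.0)
        cum += x_i
        if x_i > 0:
            num += cum / i              # average weight of the top i items
    den, cum = 0.0, 0.0
    for i, y_i in enumerate(sorted(gold.values(), reverse=True), start=1):
        cum += y_i
        den += cum / i                  # ideal counterpart (all gold weights > 0)
    return num / den

def precision_out_of_ten(ranked, gold):
    """P10: weight mass of gold paraphrases among the model's top ten."""
    top = set(ranked[:10])
    return sum(w for s, w in gold.items() if s in top) / sum(gold.values())

# Illustrative gold weights for one instance of "acquire" (cf. Table 1).
gold = {"gain": 4, "amass": 1, "receive": 1, "obtain": 1}
ranking = ["gain", "buy", "obtain", "purchase", "amass", "receive"]
print(generalized_average_precision(ranking, gold),
      precision_out_of_ten(ranking, gold))
```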
We compare our model to various baselines and the models of TDP and E&P, and show that our new model substantially outperforms previous work. Dataset. The dataset is identical to the one used by TDP and has been constructed in the same way as the dataset used by E&P: it contains those goldstandard instances of verbs that have—according to the analyses produced by the MiniPar parser (Lin, 1993)—an overtly realized subject and object. Gold-standard paraphrases that do not occur in the parsed British National Corpus are removed.2 In total, the dataset contains 162 instances for 34 different verbs. On average, target verbs have 20.5 substitution candidates; for individual instances of a target verb, an average of 3.9 of the substitution candidates are annotated as correct paraphrases. Below, we will refer to this dataset as “LST/SO.” Experimental procedure. To compute the vector space, we consider only a subset of the complete set of dependency triples extracted from the parsed Gigaword corpus. We experimented with various strategies, and found that models which consider all dependency triples exceeding certain pmi- and frequency thresholds perform best. Since the dataset is rather small, we use a fourfold cross-validation method for parameter tuning: We divide the dataset into four subsets, test various parameter settings on one subset and use the parameters that perform best (in terms of GAP) to evaluate the model on the three other subsets. We consider the following parameters: pmi-thresholds for the dependency triples used in the computation of the first- and second-order vectors, and frequency thresholds. The parameters differ only slightly between the four subsets, and the general tendency is that good results are obtained if a low pmi-threshold (≤2) is applied to filter dependency triples used in the computation of the second-order vectors, and a relatively high pmi-threshold (≥4) to filter dependency triples in the computation of the first-order vectors. Good performing frequency thresholds are 10 or 15. The threshold values for context vectors are slightly different: a medium pmi-threshold between 2 and 4 and a low frequency threshold of 3. To rank paraphrases in context, we compute contextualized vectors for the verb in the input sen2Both TDP and E&P use the British National Corpus. tence, i.e., a second order vector for the verb that is contextually constrained by the first order vectors of all its arguments, and compare them to the unconstrained (second-order) vectors of each paraphrase candidate, using cosine similarity.3 For the first sentence in Table 1, for example, we compute [[acquireSUBJ:student,OBJ:knowledge]] and compare it to [[gain]],[[amass]],[[buy]],[[purchase]] and so on. Baselines. We evaluate our model against a random baseline and two variants of our model: One variant (“2nd order uncontexualized”) simply uses contextually unconstrained second-order vectors to rank paraphrase candidates. Comparing the full model to this variant will show how effective our method of contextualizing vectors is. The second variant (“1st order contextualized”) represents verbs in context by their first order vectors that specify how often the verb co-occurs with its arguments in the parsed Gigaword corpus. We compare our model to this baseline to demonstrate the benefit of (contextualized) second-order vectors. As for the full model, we use pmi values rather than raw frequency counts as co-occurrence statistics. Results. 
For the LST/SO dataset, the generalized average precision, averaged over all instances in the dataset, is 45.94%, and the average P10 is 73.11%. Table 2 compares our model to the random baseline, the two variants of our model, and previous work. As can be seen, our model improves about 8% in terms of GAP and almost 7% in terms of P10 upon the two variants of our model, which in turn perform 10% above the random baseline. We conclude that both the use of second-order vectors and the method used to contextualize them are very effective for the task under consideration.

Model | GAP | P10
Random baseline | 26.03 | 54.25
E&P (add, object) | 29.93 | 66.20
E&P (min, subject & object) | 32.22 | 64.86
TDP | 36.54 | 63.32
1st order contextualized | 36.09 | 59.35
2nd order uncontextualized | 37.65 | 66.32
Full model | 45.94 | 73.11

Table 2: Results of Experiment 1

The table also compares our model to the model of TDP and two different instantiations of E&P’s model. The results for these three models are cited from Thater et al. (2009). We can observe that our model improves about 9% in terms of GAP and about 7% in terms of P10 upon previous work. Note that the results for the E&P models are based on a reimplementation of E&P’s original model—the P10-scores reported by Erk and Padó (2009) range between 60.2 and 62.3, over a slightly lower random baseline. According to a paired t-test the differences are statistically significant at p < 0.01.

3 Note that the context information is the same for both words. With our choice of pointwise multiplication for the composition operator × we have (\vec{v}_1 × \vec{w}) · \vec{v}_2 = \vec{v}_1 · (\vec{v}_2 × \vec{w}). Therefore the choice of which word is contextualized does not strongly influence their cosine similarity, and contextualizing both should not add any useful information. On the contrary, we found that it even lowers performance. Although this could be repaired by appropriately modifying the operator ×, for this experiment we stick with the easier solution of only contextualizing one of the words.

Performance on the complete dataset. To find out how our model performs on less controlled datasets, we extracted all instances from the lexical substitution task dataset with a verb target, excluding only instances which could not be parsed by the Stanford parser, or in which the target was mistagged as a non-verb by the parser. The resulting dataset contains 496 instances. As for the LST/SO dataset, we ignore all gold-standard paraphrases that do not occur in the parsed (Gigaword) corpus. If we use the best-performing parameters from the first experiment, we obtain a GAP score of 45.17% and a P10 score of 75.43%, compared to random baselines of 27.42% (GAP) and 58.83% (P10). The performance on this larger dataset is thus almost the same as our results for the more controlled dataset. We take this as evidence that our model is quite robust w.r.t. different realizations of a verb’s subcategorization frame.

4.4 Experiment 2: Non-verb paraphrases

We now apply our model to parts of speech (POS) other than verbs. The main difference between verbs on the one hand, and nouns, adjectives, and adverbs on the other hand, is that verbs typically come with a rich context—subject, object, and so on—while non-verbs often have either no dependents at all or only closed-class dependents such as determiners, which provide only limited contextual information, if any at all.
While we can apply the same method as before also to non-verbs, we might expect it to work less well due to limited contextual information. We therefore propose an alternative method to rank non-verb paraphrases: we take the second-order vector of the target’s head and contextually constrain it by the first-order vector of the target. For instance, if we want to rank the paraphrase candidates hint and star for the noun lead in the sentence

(1) Meet for coffee early, swap leads and get permission to contact if possible.

we compute [[swap_{OBJ:lead}]] and compare it to the lifted first-order vectors of all paraphrase candidates, L_OBJ([hint]) and L_OBJ([star]), using cosine similarity.

To evaluate the performance of the two methods, we extract all instances from the lexical substitution task dataset with a nominal, adjectival, or adverbial target, excluding instances with an incorrect parse or no parse at all. As before, we ignore gold-standard paraphrases that do not occur in the parsed Gigaword corpus. The results are shown in Table 3, where “M1” refers to the method we used before on verbs, and “M2” refers to the alternative method described above.

POS | Instances | M1 | M2 | Baseline
Noun | 535 | 46.38 | 42.54 | 30.01
Adj | 508 | 39.41 | 43.21 | 28.32
Adv | 284 | 48.19 | 51.43 | 37.25

Table 3: GAP scores for non-verb paraphrases using two different methods.

As one can see, M1 achieves better results than M2 if applied to nouns, while M2 is better than M1 if applied to adjectives and adverbs. The second result is unsurprising, as adjectives and adverbs often have no dependents at all. We can observe that the performance of our model is similarly strong on non-verbs. GAP scores on nouns (using M1) and adverbs are even higher than those on verbs. We take these results to show that our model can be successfully applied to all open word classes.

5 Experiment: Ranking Word Senses

In this section, we apply our model to a different word sense ranking task: given a word w in context, the task is to decide to what extent the different WordNet (Fellbaum, 1998) senses of w apply to this occurrence of w.

Dataset. We use the dataset provided by Erk and McCarthy (2009). The dataset contains ordinal judgments of the applicability of WordNet senses on a 5-point scale, ranging from completely different to identical, for eight different lemmas in 50 different sentential contexts. In this experiment, we concentrate on the three verbs in the dataset: ask, add and win.

Experimental procedure. Similar to Pennacchiotti et al. (2008), we represent different word senses by the words in the corresponding synsets. For each word sense, we compute the centroid of the second-order vectors of its synset members. Since synsets tend to be small (they may even contain only the target word itself), we additionally add the centroid of the sense’s hypernyms, scaled down by the factor 10 (chosen as a rough heuristic without any attempt at optimization). We apply the same method as in Section 4.3: for each instance in the dataset, we compute the second-order vector of the target verb, contextually constrain it by the first-order vectors of the verb’s arguments, and compare the resulting vector to the vectors that represent the different WordNet senses of the verb. The WordNet senses are then ranked according to the cosine similarity between their sense vector and the contextually constrained target verb vector. To compare the predicted ranking to the gold-standard ranking, we use Spearman’s ρ, a standard method to compare ranked lists to each other.
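The sense representation just described can be sketched as follows. This is our reading of the procedure, not the authors' code: the WordNet lookups use NLTK's interface, second_order_of and cosine are assumed to be the vector-building and similarity functions from Section 3 (e.g., as sketched earlier), and the 0.1 hypernym weight corresponds to the scaling down by a factor of 10 mentioned above.

```python
from collections import defaultdict
from nltk.corpus import wordnet as wn   # assumes NLTK's WordNet data is installed

def centroid(vectors):
    c = defaultdict(float)
    for vec in vectors:
        for dim, v in vec.items():
            c[dim] += v / len(vectors)
    return c

def sense_vectors(verb, second_order_of, hypernym_weight=0.1):
    """One vector per WordNet sense of `verb`: the centroid of the second-order
    vectors of the synset members, plus the centroid of the hypernym synsets'
    members scaled down by a factor of 10 (hypernym_weight = 0.1)."""
    senses = {}
    for synset in wn.synsets(verb, pos=wn.VERB):
        vec = centroid([second_order_of(w) for w in synset.lemma_names()])
        hyper_words = [w for h in synset.hypernyms() for w in h.lemma_names()]
        if hyper_words:
            hyper = centroid([second_order_of(w) for w in hyper_words])
            for dim, v in hyper.items():
                vec[dim] += hypernym_weight * v
        senses[synset.name()] = vec
    return senses

def rank_senses(contextualized_target, senses, cosine):
    """Rank the senses by cosine similarity to the contextually constrained
    second-order vector of the target verb."""
    return sorted(senses, key=lambda s: cosine(contextualized_target, senses[s]),
                  reverse=True)
```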
We compute ρ between the similarity scores averaged over all three annotators and our model’s predictions. Based on agreement between human judges, Erk and McCarthy (2009) estimate an upper bound ρ of 0.544 for the dataset.

Results. Table 4 shows the results of our experiment. The first column shows the correlation of our model’s predictions with the human judgments from the gold standard, averaged over all instances. All correlations are significant (p < 0.001) as tested by approximate randomization (Noreen, 1989). The second column shows the results of a frequency-informed baseline, which predicts the ranking based on the order of the senses in WordNet. This (weakly supervised) baseline outperforms our unsupervised model for two of the three verbs. As a final step, we explored the effect of combining our rankings with those of the frequency baseline, by simply computing the average ranks of those two models. The results are shown in the third column. Performance is significantly higher than for both the original model and the frequency-informed baseline. This shows that our model captures an additional kind of information, and thus can be used to improve the frequency-based model.

Word | Present paper | WN-Freq | Combined
ask | 0.344 | 0.369 | 0.431
add | 0.256 | 0.164 | 0.270
win | 0.236 | 0.343 | 0.381
average | 0.279 | 0.291 | 0.361

Table 4: Correlation of model predictions and human judgments

6 Conclusion

We have presented a novel method for adapting the vector representations of words according to their context. In contrast to earlier approaches, our model incorporates detailed syntactic information. We solved the problems of data sparseness and incompatibility of dimensions which are inherent in this approach by modeling contextualization as an interplay between first- and second-order vectors. Evaluating on the SemEval 2007 lexical substitution task dataset, our model performs substantially better than all earlier approaches, exceeding the state of the art by around 9% in terms of generalized average precision and around 7% in terms of precision out of ten. Also, our system is the first unsupervised method that has been applied to Erk and McCarthy’s (2009) graded word sense assignment task, showing a substantial positive correlation with the gold standard. We further showed that a weakly supervised heuristic, making use of WordNet sense ranks, can be significantly improved by incorporating information from our system.

We studied the effect that context has on target words in a series of experiments, which vary the target word and keep the context constant. A natural objective for further research is the influence of varying contexts on the meaning of target expressions. This extension might also shed light on the status of the modelled semantic process, which we have been referring to in this paper as “contextualization”. This process can be considered one of mutual disambiguation, which is basically the view of E&P. Alternatively, one can conceptualize it as semantic composition: in particular, the head of a phrase incorporates semantic information from its dependents, and the final result may to some extent reflect the meaning of the whole phrase. Another direction for further study will be the generalization of our model to larger syntactic contexts, including more than only the direct neighbors in the dependency graph, ultimately incorporating context information from the whole sentence in a recursive fashion.

Acknowledgments.
We would like to thank Eduard Hovy and Georgiana Dinu for inspiring discussions and helpful comments. This work was supported by the Cluster of Excellence “Multimodal Computing and Interaction”, funded by the German Excellence Initiative, and the project SALSA, funded by DFG (German Science Foundation). References Chris Buckley and Ellen M. Voorhees. 2000. Evaluating evaluation measure stability. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 33–40, Athens, Greece. Kenneth W. Church and Patrick Hanks. 1990. Word association, mutual information and lexicography. Computational Linguistics, 16(1):22–29. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the fifth international conference on Language Resources and Evaluation (LREC 2006), pages 449–454, Genoa, Italy. Dmitriy Dligach and Martha Palmer. 2008. Novel semantic features for verb sense disambiguation. In Proceedings of ACL-08: HLT, Short Papers, pages 29–32, Columbus, OH, USA. Katrin Erk and Diana McCarthy. 2009. Graded word sense assignment. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 440–449, Singapore. Katrin Erk and Sebastian Padó. 2008. A structured vector space model for word meaning in context. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, Honolulu, HI, USA. Katrin Erk and Sebastian Padó. 2009. Paraphrase assessment in structured vector space: Exploring parameters and datasets. In Proc. of the Workshop on Geometrical Models of Natural Language Semantics, Athens, Greece. Christiane Fellbaum, editor. 1998. Wordnet: An Electronic Lexical Database. Bradford Book. Walter Kintsch. 2001. Predication. Cognitive Science, 25:173–202. Kazuaki Kishida. 2005. Property of average precision and its generalization: An examination of evaluation indicator for information retrieval experiments. NII Technical Report. Thomas K. Landauer and Susan T. Dumais. 1997. A solution to plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211–240. Dekang Lin. 1993. Principle-based parsing without overgeneration. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 112–120, Columbus, OH, USA. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 2, pages 768–774. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press. Diana McCarthy and John Carroll. 2003. Disambiguating nouns, verbs, and adjectives using automatically acquired selectional preferences. Computational Linguistics, 29(4):639–654. Diana McCarthy and Roberto Navigli. 2007. SemEval2007 Task 10: English Lexical Substitution Task. In Proc. of SemEval, Prague, Czech Republic. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236–244, Columbus, OH, USA. Richard Montague. 1973. The proper treatment of quantification in ordinary English. In Jaakko Hintikka, Julius Moravcsik, and Patrick Suppes, editors, Approaches to Natural Language, pages 221–242. Dordrecht. 
Eric W. Noreen. 1989. Computer-intensive Methods for Testing Hypotheses: An Introduction. John Wiley and Sons Inc. Sebastian Padó and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161–199. Marco Pennacchiotti, Diego De Cao, Roberto Basili, Danilo Croce, and Michael Roth. 2008. Automatic induction of framenet lexical units. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 457–465, Honolulu, HI, USA. 956 Hinrich Schütze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97–124. Stefan Thater, Georgiana Dinu, and Manfred Pinkal. 2009. Ranking paraphrases in context. In Proceedings of the 2009 Workshop on Applied Textual Inference, pages 44–47, Singapore. 957
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 958–967, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Bootstrapping Semantic Analyzers from Non-Contradictory Texts Ivan Titov Mikhail Kozhevnikov Saarland University Saarbr¨ucken, Germany {titov|m.kozhevnikov}@mmci.uni-saarland.de Abstract We argue that groups of unannotated texts with overlapping and non-contradictory semantics represent a valuable source of information for learning semantic representations. A simple and efficient inference method recursively induces joint semantic representations for each group and discovers correspondence between lexical entries and latent semantic concepts. We consider the generative semantics-text correspondence model (Liang et al., 2009) and demonstrate that exploiting the noncontradiction relation between texts leads to substantial improvements over natural baselines on a problem of analyzing human-written weather forecasts. 1 Introduction In recent years, there has been increasing interest in statistical approaches to semantic parsing. However, most of this research has focused on supervised methods requiring large amounts of labeled data. The supervision was either given in the form of meaning representations aligned with sentences (Zettlemoyer and Collins, 2005; Ge and Mooney, 2005; Mooney, 2007) or in a somewhat more relaxed form, such as lists of candidate meanings for each sentence (Kate and Mooney, 2007; Chen and Mooney, 2008) or formal representations of the described world state for each text (Liang et al., 2009). Such annotated resources are scarce and expensive to create, motivating the need for unsupervised or semi-supervised techniques (Poon and Domingos, 2009). However, unsupervised methods have their own challenges: they are not always able to discover semantic equivalences of lexical entries or logical forms or, on the contrary, cluster semantically different or even opposite expressions (Poon and Domingos, 2009). Unsupervised approaches can only rely on distributional similarity of contexts (Harris, 1968) to decide on semantic relatedness of terms, but this information may be sparse and not reliable (Weeds and Weir, 2005). For example, when analyzing weather forecasts it is very hard to discover in an unsupervised way which of the expressions among “south wind”, “wind from west” and “southerly” denote the same wind direction and which are not, as they all have a very similar distribution of their contexts. The same challenges affect the problem of identification of argument roles and predicates. In this paper, we show that groups of unannotated texts with overlapping and non-contradictory semantics provide a valuable source of information. This form of weak supervision helps to discover implicit clustering of lexical entries and predicates, which presents a challenge for purely unsupervised techniques. We assume that each text in a group is independently generated from a full latent semantic state corresponding to the group. Importantly, the texts in each group do not have to be paraphrases of each other, as they can verbalize only specific parts (aspects) of the full semantic state, yet statements about the same aspects must not contradict each other. 
Simultaneous inference of the semantic state for the non-contradictory and semantically overlapping documents would restrict the space of compatible hypotheses, and, intuitively, ‘easier’ texts in a group will help to analyze the ‘harder’ ones.1 As an illustration of why this weak supervision may be valuable, consider a group of two non-contradictory texts, where one text mentions “2.2 bn GBP decrease in profit”, whereas another one includes a passage “profit fell by 2.2 billion pounds”. Even if the model has not observed the word “fell” before, it is likely to align these phrases to the same semantic form because of similarity of their arguments. And this alignment would suggest that “fell” and “decrease” refer to the same process, and should be clustered together. This would not happen for the pair “fell” and “increase”, as similarity of their arguments would normally entail contradiction. Similarly, in the example mentioned earlier, when describing a forecast for a day with expected south winds, texts in the group can use either “south wind” or “southerly” to indicate this fact, but no texts would verbalize it as “wind from west”, and therefore these expressions will be assigned to different semantic clusters. However, it is important to note that the phrase “wind from west” may still appear in the texts, but in reference to other time periods, underlining the need for modeling alignment between grouped texts and their latent meaning representation.

1 This view on this form of supervision is evocative of co-training (Blum and Mitchell, 1998) which, roughly, exploits the fact that the same example can be ‘easy’ for one model but ‘hard’ for another one.

[Figure 1: An example of three non-contradictory weather forecasts and their alignment to the semantic representation. Note that the semantic representation (the block in the middle) is not observable in training. The figure shows three forecast texts w1, w2, w3 aligned to a shared set of weather records such as temperature, windDir, windSpeed, gust, skycover, precipPotential, rainChance and thunderChance.]

As much of the human knowledge is redescribed multiple times, we believe that non-contradictory and semantically overlapping texts are often easy to obtain. For example, consider semantic analysis of news articles or biographies. In both cases we can find groups of documents referring to the same events or persons, and though they will probably focus on different aspects and have different subjective passages, they are likely to agree on the core information (Shinyama and Sekine, 2003).
Alternatively, if such groupings are not available, it may still be easier to give each semantic representation (or a state) to multiple annotators and ask each of them to provide a textual description, instead of annotating texts with semantic expressions. The state can be communicated to them in a visual or audio form (e.g., as a picture or a short video clip) ensuring that their interpretations are consistent. Unsupervised learning with shared latent semantic representations presents its own challenges, as exact inference requires marginalization over possible assignments of the latent semantic state, consequently, introducing non-local statistical dependencies between the decisions about the semantic structure of each text. We propose a simple and fairly general approximate inference algorithm for probabilistic models of semantics which is efficient for the considered model, and achieves favorable results in our experiments. In this paper, we do not consider models which aim to produce complete formal meaning of text (Zettlemoyer and Collins, 2005; Mooney, 2007; Poon and Domingos, 2009), instead focusing on a simpler problem studied in (Liang et al., 2009). They investigate grounded language acquisition set-up and assume that semantics (world state) can be represented as a set of records each consisting of a set of fields. Their model segments text into utterances and identifies records, fields and field values discussed in each utterance. Therefore, one can think of this problem as an extension of the semantic role labeling problem (Carreras and Marquez, 2005), where predicates (i.e. records in our notation) and their arguments should be identified in text, but here arguments are not only assigned to a specific role (field) but also mapped to an underlying equivalence class (field value). For example, in the weather forecast domain field sky cover should get the same value given expressions “overcast” and “very cloudy” but a different one if the expres959 sions are “clear” or “sunny”. This model is hard to evaluate directly as text does not provide information about all the fields and does not necessarily provide it at the sufficient granularity level. Therefore, it is natural to evaluate their model on the database-text alignment problem (Snyder and Barzilay, 2007), i.e. measuring how well the model predicts the alignment between the text and the observable records describing the entire world state. We follow their set-up, but assume that instead of having access to the full semantic state for every training example, we have a very small amount of data annotated with semantic states and a larger number of unannotated texts with noncontradictory semantics. We study our set-up on the weather forecast data (Liang et al., 2009) where the original textual weather forecasts were complemented by additional forecasts describing the same weather states (see figure 1 for an example). The average overlap between the verbalized fields in each group of noncontradictory forecasts was below 35%, and more than 60% of fields are mentioned only in a single forecast from a group. Our model, learned from 100 labeled forecasts and 259 groups of unannotated non-contradictory forecasts (750 texts in total), achieved 73.9% F1. This compares favorably with 69.1% shown by a semi-supervised learning approach, though, as expected, does not reach the score of the model which, in training, observed semantics states for all the 750 documents (77.7% F1). The rest of the paper is structured as follows. 
In section 2 we describe our inference algorithm for groups of non-contradictory documents. Section 3 redescribes the semantics-text correspondence model (Liang et al., 2009) in the context of our learning scenario. In section 4 we provide an empirical evaluation of the proposed method. We conclude in section 5 with an examination of additional related work. 2 Inference with Non-Contradictory Documents In this section we will describe our inference method on a higher conceptual level, not specifying the underlying meaning representation and the probabilistic model. An instantiation of the algorithm for the semantics-text correspondence model is given in section 3.2. Statistical models of parsing can often be regarded as defining the probability distribution of meaning m and its alignment a with the given text w, P(m, a, w) = P(a, w|m)P(m). The semantics m can be represented either as a logical formula (see, e.g., (Poon and Domingos, 2009)) or as a set of field values if database records are used as a meaning representation (Liang et al., 2009). The alignment a defines how semantics is verbalized in the text w, and it can be represented by a meaning derivation tree in case of full semantic parsing (Poon and Domingos, 2009) or, e.g., by a hierarchical segmentation into utterances along with an utterance-field alignment in a more shallow variation of the problem. In semantic parsing, we aim to find the most likely underlying semantics and alignment given the text: ( ˆm, ˆa) = arg max m,a P(a, w|m)P(m). (1) In the supervised case, where a and m are observable, estimation of the generative model parameters is generally straightforward. However, in a semi-supervised or unsupervised case variational techniques, such as the EM algorithm (Dempster et al., 1977), are often used to estimate the model. As common for complex generative models, the most challenging part is the computation of the posterior distributions P(a, m|w) on the E-step which, depending on the underlying model P(m, a, w), may require approximate inference. As discussed in the introduction, our goal is to integrate groups of non-contradictory documents into the learning procedure. Let us denote by w1,..., wK a group of non-contradictory documents. As before, the estimation of the posterior probabilities P(mi, ai|w1 . . . wK) presents the main challenge. Note that the decision about mi is now conditioned on all the texts wj rather than only on wi. This conditioning is exactly what drives learning, as the information about likely semantics mj of text j affects the decision about choice of mi: P(mi|w1,..., wK) ∝ X ai P(ai, wi|mi)× × X m−i,a−i P(mi|m−i)P(m−i, a−i, w−i), (2) where x−i denotes {xj : j ̸= i}. P(mi|m−i) is the probability of the semantics mi given all the meanings m−i. This probability assigns zero weight to inconsistent meanings, i.e. such mean960 ings (m1,..., mK) that ∧K i=1mi is not satisfiable,2 and models dependencies between components in the composite meaning representation (e.g., arguments values of predicates). As an illustration, in the forecast domain it may express that clouds, and not sunshine, are likely when it is raining. Note, that this probability is different from the probability that mi is actually verbalized in the text. Unfortunately, these dependencies between mi and wj are non-local. 
Even though the dependencies are only conveyed via {mj : j ≠ i}, the space of possible meanings m is very large even for relatively simple semantic representations, and, therefore, we need to resort to efficient approximations. One natural approach would be to use a form of belief propagation (Pearl, 1982; Murphy et al., 1999), where messages pass information about likely semantics between the texts. However, this approach is still expensive even for simple models, both because of the need to represent distributions over m and also because of the large number of iterations of message exchange needed to reach convergence (if it converges).

An even simpler technique would be to parse texts in a random order, conditioning each meaning m⋆_k for k ∈ {1, …, K} on all the previous semantics m⋆_{<k} = m⋆_1, …, m⋆_{k−1}:

m⋆_k = \arg\max_{m_k} P(w_k | m_k)\, P(m_k | m⋆_{<k})

Here, and in further discussion, we assume that the above search problem can be efficiently solved, exactly or approximately. However, a major weakness of this algorithm is that decisions about components of the composite semantic representation (e.g., argument values) are made only on the basis of a single text, which first mentions the corresponding aspects, without consulting any future texts k′ > k, and these decisions cannot be revised later.

We propose a simple algorithm which aims to find an appropriate order for the greedy inference by estimating how well each candidate semantics m̂_k would explain other texts and at each step selecting k (and m̂_k) which explains them best. The algorithm, presented in Figure 2,3 constructs an ordering of texts n = (n_1, …, n_K) and corresponding meaning representations m⋆ = (m⋆_1, …, m⋆_K), where m⋆_k is the predicted meaning representation of text w_{n_k}.

2 Note that checking for satisfiability may be expensive or intractable depending on the formalism.
3 We slightly abuse notation by using set operations with the lists n and m⋆ as arguments. Also, for all the document indices j we use j ∉ S to denote j ∈ {1, …, K} \ S.

1:  n := (), m⋆ := ()
2:  for i := 1 : K − 1 do
3:    for j ∉ n do
4:      m̂_j := \arg\max_{m_j} P(m_j, w_j | m⋆)
5:    end for
6:    n_i := \arg\max_{j ∉ n} P(m̂_j, w_j | m⋆) · \prod_{k ∉ n ∪ {j}} \max_{m_k} P(m_k, w_k | m⋆, m̂_j)
7:    m⋆_i := m̂_{n_i}
8:  end for
9:  n_K := {1, …, K} \ n
10: m⋆_K := \arg\max_{m_{n_K}} P(m_{n_K}, w_{n_K} | m⋆)

Figure 2: The approximate inference algorithm.

The algorithm starts with an empty ordering n = () and an empty list of meanings m⋆ = () (line 1). Then it iteratively predicts meaning representations m̂_j conditioned on the list of semantics m⋆ = (m⋆_1, …, m⋆_{i−1}) fixed on the previous stages, and does it for all the remaining texts w_j (lines 3–5). The algorithm selects a single meaning m̂_j which maximizes the probability of all the remaining texts and excludes the text j from future consideration (lines 6–7). Though the semantics m_k (k ∉ n ∪ {j}) used in the estimates (line 6) can be inconsistent with each other, the final list of meanings m⋆ is guaranteed to be consistent. This holds because on each iteration we add a single meaning m̂_{n_i} to m⋆ (line 7), and m̂_{n_i} is guaranteed to be consistent with m⋆, as the semantics m̂_{n_i} was conditioned on the meaning m⋆ during inference (line 4). An important aspect of this algorithm is that, unlike usual greedy inference, the remaining (‘future’) texts do affect the choice of meaning representations made at the earlier stages. As soon as semantics m⋆_k are inferred for every k, we find ourselves in the set-up of learning with unaligned semantic states considered in (Liang et al., 2009).
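A compact rendering of Figure 2 in code may clarify the control flow. This is a schematic sketch, and best_meaning is an abstract placeholder for the model-specific search on line 4 (which also yields the probability used on line 6); it is not a routine defined in the paper.

```python
def order_and_infer(texts, best_meaning):
    """Greedy group inference, following Figure 2.

    `best_meaning(w, fixed)` returns (m_hat, p), where m_hat maximizes
    P(m, w | fixed) over meanings consistent with the already fixed list
    and p is that maximal probability (line 4 of the figure).
    Returns the chosen order n and the consistent meaning list m*.
    """
    K = len(texts)
    order, fixed = [], []                       # n and m* in the paper's notation
    remaining = set(range(K))
    while len(remaining) > 1:
        # line 4: best candidate meaning (and its score) for every remaining text
        cand = {j: best_meaning(texts[j], fixed) for j in remaining}

        # line 6: choose the text whose candidate meaning best explains the others
        def objective(j):
            m_hat, p = cand[j]
            for k in remaining - {j}:
                p *= best_meaning(texts[k], fixed + [m_hat])[1]
            return p

        j_best = max(remaining, key=objective)
        order.append(j_best)                    # line 6: n_i := j_best
        fixed.append(cand[j_best][0])           # line 7: m*_i := m_hat of n_i
        remaining.remove(j_best)
    j_last = remaining.pop()                    # lines 9-10: handle the last text
    order.append(j_last)
    fixed.append(best_meaning(texts[j_last], fixed)[0])
    return order, fixed
```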
The induced alignments a1,..., aK of semantics m⋆to texts w1,..., wK at the same time induce alignments between the texts. The problem of producing multiple sequence alignment, especially in the context of sentence alignments, has been extensively studied in NLP (Barzilay and Lee, 2003). In this paper, we use semantic structures as a pivot for finding the best alignment in the hope that presence of meaningful text alignments will improve the quality of the resulting semantic structures by enforcing a form of agreement between them. 961 3 A Model of Semantics In this section we redescribe the semantics-text correspondence model (Liang et al., 2009) with an extension needed to model examples with latent states, and also explain how the inference algorithm defined in section 2 can be applied to this model. 3.1 Model definition Liang et al. (2009) considered a scenario where each text was annotated with a world state, even though alignment between the text and the state was not observable. This is a weaker form of supervision than the one traditionally considered in supervised semantic parsing, where the alignment is also usually provided in training (Chen and Mooney, 2008; Zettlemoyer and Collins, 2005). Nevertheless, both in training and testing the world state is observable, and the alignment and the text are conditioned on the state during inference. Consequently, there was no need to model the distribution of the world state. This is different for us, and we augment the generative story by adding a simplistic world state generation step. As explained in the introduction, the world states s are represented by sets of records (see the block in the middle of figure 1 for an example of a world state). Each record is characterized by a record type t ∈{1,..., T}, which defines the set of fields F (t). There are n(t) records of type t and this number may change from document to document. For example, there may be more than a single record of type wind speed, as they may refer to different time periods but all these records have the same set of fields, such as minimal, maximal and average wind speeds. Each field has an associated type: in our experiments we consider only categorical and integer fields. We write s(t) n,f = v to denote that n-th record of type t has field f set to value v. Each document k verbalizes a subset of the entire world state, and therefore semantics mk of the document is an assignment to |mk| verbalized fields: ∧|mk| q=1 (s(tq) nq,fq = vq), where tq, nq, fq are the verbalized record types, records and fields, respectively, and vq is the assigned field value. The probability of meaning mk then equals the probability of this assignment with other state variables left non-observable (and therefore marginalized out). In this formalism checking for contradiction is trivial: two meaning representations Figure 3: The semantics-text correspondence model with K documents sharing the same latent semantic state. contradict each other if they assign different values to the same field of the same record. The semantics-text correspondence model defines a hierarchical segmentation of text: first, it segments the text into fragments discussing different records, then the utterances corresponding to each record are further segmented into fragments verbalizing specific fields of that record. An example of a segmented fragment is presented in figure 4. The model has a designated null-record which is aligned to words not assigned to any record. 
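Under this record-and-field representation, a partial meaning is simply a set of field assignments, and the contradiction test reduces to comparing overlapping keys. A minimal sketch (ours; the record and field names are illustrative, loosely following the weather domain of Figure 1):

```python
# A partial meaning: {(record_type, record_index, field): value}.
# Record and field names below (windDir, windSpeed, ...) are illustrative only.
m1 = {("windDir", 0, "mode"): "S",
      ("windSpeed", 0, "mean"): 19,
      ("skycover", 0, "bucket"): "75-100"}
m2 = {("windDir", 0, "mode"): "S",
      ("temperature", 0, "max"): 75}
m3 = {("windDir", 0, "mode"): "W"}

def contradicts(ma, mb):
    """Two partial meanings contradict iff they assign different values
    to the same field of the same record."""
    return any(ma[key] != mb[key] for key in ma.keys() & mb.keys())

def merge(ma, mb):
    """Conjunction of two consistent partial meanings; None if they contradict."""
    if contradicts(ma, mb):
        return None
    merged = dict(ma)
    merged.update(mb)
    return merged

assert not contradicts(m1, m2) and contradicts(m1, m3)
print(merge(m1, m2))   # the joint (still partial) assignment for the group
```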
Additionally there is a null-field in each record to handle words not specific to any field. In figure 3 the corresponding graphical model is presented. The formal definition of the model for documents w1,..., wK sharing a semantic state is as follows: • Generation of world state s: – For each type τ ∈{1,..., T} choose a number of records of that type n(τ) ∼Unif(1,..., nmax). – For each record s(τ) n , n ∈{1, .., n(τ)} choose field values s(τ) nf for all fields f ∈F (τ) from the type-specific distribution. • Generation of the verbalizations, for each document wk, k ∈{1,..., K}:4 – Record Types: Choose a sequence of verbalized record types t = (t1,..., t|t|) from the first-order Markov chain. – Records: For each type ti choose a verbalized record ri from all the records of that type: l ∼ Unif(1,..., n(τ)), ri := s(ti) l . – Fields: For each record ri choose a sequence of verbalized fields f i = (fi1,..., fi|f i|) from the first-order Markov chain (fij ∈F (ti)). – Length: For each field fij, choose length cij ∼ Unif(1,..., cmax). – Words: Independently generate cij words from the field-specific distribution P(w|fij, rifij). 4We omit index k in the generative story and figure 3 to simplify the notation. 962 Figure 4: A segmentation of a text fragment into records and fields. Note that, when generating fields, the Markov chain is defined over fields and the transition parameters are independent of the field values rifij. On the contrary, when drawing a word, the distribution of words is conditioned on the value of the corresponding field. The form of word generation distributions P(w|fij, rifij) depends on the type of the field fi,j. For categorical fields, the distribution of words is modeled as a distinct multinomial for each field value. Verbalizations of numerical fields are generated via a perturbation on the field value rifij: the value rifij can be perturbed by either rounding it (up or down) or distorting (up or down, modeled by a geometric distribution). The parameters corresponding to each form of generation are estimated during learning. For details on these emission models, as well as for details on modeling record and field transitions, we refer the reader to the original publication (Liang et al., 2009). In our experiments, when choosing a world state s, we generate the field values independently. This is clearly a suboptimal regime as often there are very strong dependencies between field values: e.g., in the weather domain many record types contain groups of related fields defining minimal, maximal and average values of some parameter. Extending the method to model, e.g., pairwise dependencies between field values is relatively straightforward. As explained above, semantics of a text m is defined by the assignment of state variables s. Analogously, an alignment a between semantics m and a text w is represented by all the remaining latent variables: by the sequence of record types t = (t1,..., t|t|), choice of records ri for each ti, the field sequence f i and the segment length cij for every field fij. 3.2 Learning and inference We select the model parameters θ by maximizing the marginal likelihood of the data, where the data D is given in the form of groups w = {w1,..., wK} sharing the same latent state:5 max θ Y w∈D X s P(s) Y k X r,f,c P(r, f, c, wk|s, θ). To estimate the parameters, we use the Expectation-Maximization algorithm (Dempster et al., 1977). 
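The generative story above can be summarized in a schematic sampler. This is only a sketch under our own simplified parameterization: the distributions are plain dictionaries, START/STOP symbols handle the chain boundaries, and none of the names correspond to the authors' actual parameter estimates.

```python
import random

def sample_document(state, params, rng=random):
    """Schematic sampler for one document given a world state. `state` maps a
    record type to a list of records (each a dict field -> value); `params`
    bundles the Markov-chain and emission distributions as dicts mapping an
    outcome to its probability."""
    def draw(dist):
        r, acc = rng.random(), 0.0
        for outcome, p in dist.items():
            acc += p
            if r <= acc:
                return outcome
        return outcome                                         # numerical slack

    words, prev_type = [], "START"
    while True:
        t = draw(params["type_transition"][prev_type])         # record-type Markov chain
        if t == "STOP":
            break
        record = rng.choice(state[t])                          # uniform record choice
        prev_field = "START"
        while True:
            f = draw(params["field_transition"][t][prev_field])   # field Markov chain
            if f == "STOP":
                break
            length = rng.randint(1, params["c_max"])           # segment length
            value = record.get(f)                              # None for the null field
            for _ in range(length):                            # field/value-specific words
                words.append(draw(params["emission"][(t, f, value)]))
            prev_field = f
        prev_type = t
    return words
```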
When the world state is observable, learning does not require any approximations, as dynamic programming (a form of the forward-backward algorithm) can be used to infer the posterior distribution on the E-step (Liang et al., 2009). However, when the state is latent, dependencies are not local anymore, and approximate inference is required. We use the algorithm described in section 2 (figure 2) to infer the state. In the context of the semantics-text correspondence model, as we discussed above, semantics m defines the subset of admissible world states. In order to use the algorithm, we need to understand how the conditional probabilities of the form P(m′|m) are computed, as they play the key role in the inference procedure (see equation (2)). If there is a contradiction (m′⊥m) then P(m′|m) = 0, conversely, if m′ is subsumed by m (m →m′) then this probability is 1. Otherwise, P(m′|m) equals the probability of new assignments ∧|m′\m| q=1 (s (t′ q) n′q,f′q = v′ q) (defined by m′\m) conditioned on the previously fixed values of s (given by m). Summarizing, when predicting the most likely semantics ˆmj (line 4), for each span the decoder weighs alternatives of either (1) aligning this span to the previously induced meaning m⋆, or (2) aligning it to a new field and paying the cost of generation of its value. The exact computation of the most probable semantics (line 4 of the algorithm) is intractable, and we have to resort to an approximation. Instead of predicting the most probable semantics ˆmj we search for the most probable pair (ˆaj, ˆmj), thus assuming that the probability mass is mostly concentrated on a single alignment. The alignment aj 5For simplicity, we assume here that all the examples are unlabeled. 963 is then discarded and not used in any other computations. Though the most likely alignment ˆaj for a fixed semantic representation ˆmj can be found efficiently using a Viterbi algorithm, computing the most probable pair (ˆaj, ˆmj) is still intractable. We use a modification of the beam search algorithm, where we keep a set of candidate meanings (partial semantic representations) and compute an alignment for each of them using a form of the Viterbi algorithm. As soon as the meaning representations m⋆are inferred, we find ourselves in the set-up studied in (Liang et al., 2009): the state s is no longer latent and we can run efficient inference on the E-step. Though some fields of the state s may still not be specified by m⋆, we prohibit utterances from aligning to these non-specified fields. On the M-step of EM the parameters are estimated as proportional to the expected marginal counts computed on the E-step. We smooth the distributions of values for numerical fields with convolution smoothing equivalent to the assumption that the fields are affected by distortion in the form of a two-sided geometric distribution with the success rate parameter equal to 0.67. We use add-0.1 smoothing for all the remaining multinomial distributions. 4 Empirical Evaluation In this section, we consider the semi-supervised set-up, and present evaluation of our approach on on the problem of aligning weather forecast reports to the formal representation of weather. 4.1 Experiments To perform the experiments we used a subset of the weather dataset introduced in (Liang et al., 2009). The original dataset contains 22,146 texts of 28.7 words on average, there are 12 types of records (predicates) and 36.0 records per forecast on average. 
We randomly chose 100 texts along with their world states to be used as the labeled data.6 To produce groups of noncontradictory texts we have randomly selected a subset of weather states, represented them in a visual form (icons accompanied by numerical and 6In order to distinguish from completely unlabeled examples, we refer to examples labeled with world states as labeled examples. Note though that the alignments are not observable even for these labeled examples. Similarly, we call the models trained from this data supervised though full supervision was not available. symbolic parameters) and then manually annotated these illustrations. These newly-produced forecasts, when combined with the original texts, resulted in 259 groups of non-contradictory texts (650 texts, 2.5 texts per group). An example of such a group is given in figure 1. The dataset is relatively noisy: there are inconsistencies due to annotation mistakes (e.g., number distortions), or due to different perception of the weather by the annotators (e.g., expressions such as ‘warm’ or ‘cold’ are subjective). The overlap between the verbalized fields in each group was estimated to be below 35%. Around 60% of fields are mentioned only in a single forecast from a group, consequently, the texts cannot be regarded as paraphrases of each other. The test set consists of 150 texts, each corresponding to a different weather state. Note that during testing we no longer assume that documents share the state, we treat each document in isolation. We aimed to preserve approximately the same proportion of new and original examples as we had in the training set, therefore, we combined 50 texts originally present in the weather dataset with additional 100 newly-produced texts. We annotated these 100 texts by aligning each line to one or more records,7 whereas for the original texts the alignments were already present. Following Liang et al. (2009) we evaluate the models on how well they predict these alignments. When estimating the model parameters, we followed the training regime prescribed in (Liang et al., 2009). Namely, 5 iterations of EM with a basic model (with no segmentation or coherence modeling), followed by 5 iterations of EM with the model which generates fields independently and, at last, 5 iterations with the full model. Only then, in the semi-supervised learning scenarios, we added unlabeled data and ran 5 additional iterations of EM. Instead of prohibiting records from crossing punctuation, as suggested by Liang et al. (2009), in our implementation we disregard the words not attached to specific fields (attached to the nullfield, see section 3.1) when computing spans of records. To speed-up training, only a single record of each type is allowed to be generated when running inference for unlabeled examples on the E7The text was automatically tokenized and segmented into lines, with line breaks at punctuation characters. Information about the line breaks is not used during learning and inference. 964 P R F1 Supervised BL 63.3 52.9 57.6 Semi-superv BL 68.8 69.4 69.1 Semi-superv, non-contr 78.8 69.5 73.9 Supervised UB 69.4 88.6 77.9 Table 1: Results (precision, recall and F1) on the weather forecast dataset. step of the EM algorithm, as it significantly reduces the search space. Similarly, though we preserved all records which refer to the first time period, for other time periods we removed all the records which declare that the corresponding event (e.g., rain or snowfall) is not expected to happen. 
This preprocessing results in the oracle recall of 93%. We compare our approach (Semi-superv, noncontr) with two baselines: the basic supervised training on 100 labeled forecasts (Supervised BL) and with the semi-supervised training which disregards the non-contradiction relations (Semi-superv BL). The learning regime, the inference procedure and the texts for the semi-supervised baseline were identical to the ones used for our approach, the only difference is that all the documents were modeled as independent. Additionally, we report the results of the model trained with all the 750 texts labeled (Supervised UB), its scores can be regarded as an upper bound on the results of the semi-supervised models. The results are reported in table 1. 4.2 Discussion Our training strategy results in a substantially more accurate model, outperforming both the supervised and semi-supervised baselines. Surprisingly, its precision is higher than that of the model trained on 750 labeled examples, though admittedly it is achieved at a very different recall level. The estimation of the model with our approach takes around one hour on a standard desktop PC, which is comparable to 40 minutes required to train the semi-supervised baseline. In these experiments, we consider the problem of predicting alignment between text and the corresponding observable world state. The direct evaluation of the meaning recognition (i.e. semantic parsing) accuracy is not possible on this dataset, as the data does not contain information which fields are discussed. Even if it would provalue top words 0-25 clear, small, cloudy, gaps, sun 25-50 clouds, increasing, heavy, produce, could 50-75 cloudy, mostly, high, cloudiness, breezy 75-100 amounts, rainfall, inch, new, possibly Table 2: Top 5 words in the word distribution for field mode of record sky cover, function words and punctuation are omitted. vide this information, the documents do not verbalize the state at the necessary granularity level to predict the field values. For example, it is not possible to decide to which bucket of the field sky cover the expression ‘cloudy’ refers to, as it has a relatively uniform distribution across 3 (out of 4) buckets. The problem of predicting text-meaning alignments is interesting in itself, as the extracted alignments can be used in training of a statistical generation system or information extractors, but we also believe that evaluation on this problem is an appropriate test for the relative comparison of the semantic analyzers’ performance. Additionally, note that the success of our weaklysupervised scenario indirectly suggests that the model is sufficiently accurate in predicting semantics of an unlabeled text, as otherwise there would be no useful information passed in between semantically overlapping documents during learning and, consequently, no improvement from sharing the state.8 To confirm that the model trained by our approach indeed assigns new words to correct fields and records, we visualize top words for the field characterizing sky cover (table 2). Note that the words “sun”, “cloudiness” or “gaps” were not appearing in the labeled part of the data, but seem to be assigned to correct categories. However, correlation between rain and overcast, as also noted in (Liang et al., 2009), results in the wrong assignment of the rain-related words to the field value corresponding to very cloudy weather. 
5 Related Work Probably the most relevant prior work is an approach to bootstrapping lexical choice of a generation system using a corpus of alternative pas8We conducted preliminary experiments on synthetic data generated from a random semantic-correspondence model. Our approach outperformed the baselines both in predicting ‘text’-state correspondence and in the F1 score on the predicted set of field assignments (‘text meanings’). 965 sages (Barzilay and Lee, 2002), however, in their work all the passages were annotated with unaligned semantic expressions. Also, they assumed that the passages are paraphrases of each other, which is stronger than our non-contradiction assumption. Sentence and text alignment has also been considered in the related context of paraphrase extraction (see, e.g., (Dolan et al., 2004; Barzilay and Lee, 2003)) but this prior work did not focus on inducing or learning semantic representations. Similarly, in information extraction, there have been approaches for pattern discovery using comparable monolingual corpora (Shinyama and Sekine, 2003) but they generally focused only on discovery of a single pattern from a pair of sentences or texts. Radev (2000) considered types of potential relations between documents, including contradiction, and studied how this information can be exploited in NLP. However, this work considered primarily multi-document summarization and question answering problems. Another related line of research in machine learning is clustering or classification with constraints (Basu et al., 2004), where supervision is given in the form of constraints. Constraints declare which pairs of instances are required to be assigned to the same class (or required to be assigned to different classes). However, we are not aware of any previous work that generalized these methods to structured prediction problems, as trivial equality/inequality constraints are probably too restrictive, and a notion of consistency is required instead. 6 Summary and Future Work In this work we studied the use of weak supervision in the form of non-contradictory relations between documents in learning semantic representations. We argued that this type of supervision encodes information which is hard to discover in an unsupervised way. However, exact inference for groups of documents with overlapping semantic representation is generally prohibitively expensive, as the shared latent semantics introduces nonlocal dependences between semantic representations of individual documents. To combat it, we proposed a simple iterative inference algorithm. We showed how it can be instantiated for the semantics-text correspondence model (Liang et al., 2009) and evaluated it on a dataset of weather forecasts. Our approach resulted in an improvement over the scores of both the supervised baseline and of the traditional semi-supervised learning. There are many directions we plan on investigating in the future for the problem of learning semantics with non-contradictory relations. A promising and challenging possibility is to consider models which induce full semantic representations of meaning. Another direction would be to investigate purely unsupervised set-up, though it would make evaluation of the resulting method much more complex. One potential alternative would be to replace the initial supervision with a set of posterior constraints (Graca et al., 2008) or generalized expectation criteria (McCallum et al., 2007). 
Acknowledgements The authors acknowledge the support of the Excellence Cluster on Multimodal Computing and Interaction (MMCI). Thanks to Alexandre Klementiev, Alexander Koller, Manfred Pinkal, Dan Roth, Caroline Sporleder and the anonymous reviewers for their suggestions, and to Percy Liang for answering questions about his model. References Regina Barzilay and Lillian Lee. 2002. Bootstrapping lexical choice via multiple-sequence alignment. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 164–171. Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiple-sequence alignment. In Proceedings of the Conference on Human Language Technology and North American chapter of the Association for Computational Linguistics (HLT-NAACL). Sugatu Basu, Arindam Banjeree, and Raymond Mooney. 2004. Active semi-supervision for pairwise constrained clustering. In Proc. of the SIAM International Conference on Data Mining (SDM), pages 333–344. A. Blum and T. Mitchell. 1998. Combining labeled and unlabeled data with co-training. In COLT: Proceedings of the Workshop on Computational Learning Theory, Morgan Kaufmann Publishers, pages 209–214. Xavier Carreras and Lluis Marquez. 2005. Introduction to the conll-2005 shared task: Semantic role labeling. In Proceedings of CoNLL-2005, Ann Arbor, MI USA. 966 David L. Chen and Raymond L. Mooney. 2008. Learning to sportcast: A test of grounded language acquisition. In Proc. of International Conference on Machine Learning, pages 128–135. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithms. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1–38. P. Diaconis and B. Efron. 1983. Computer-intensive methods in statistics. Scientific American, pages 116–130. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the Conference on Computational Linguistics (COLING), pages 350–356. Ruifang Ge and Raymond J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CONLL-05), Ann Arbor, Michigan. Joao Graca, Kuzman Ganchev, and Ben Taskar. 2008. Expectation maximization and posterior constraints. Advances in Neural Information Processing Systems 20 (NIPS). Zellig Harris. 1968. Mathematical structures of language. Wiley. Rohit J. Kate and Raymond J. Mooney. 2007. Learning language semantics from ambigous supervision. In Association for the Advancement of Artificial Intelligence (AAAI), pages 895–900. Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proc. of the Annual Meeting of the Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP). Andrew McCallum, Gideon Mann, and Gregory Druck. 2007. Generalized expectation criteria. Technical Report TR 2007-60, University of Massachusetts, Amherst, MA. Raymond J. Mooney. 2007. Learning for semantic parsing. In Proceedings of the 8th International Conference on Computational Linguistics and Intelligent Text Processing, pages 982–991. Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. 1999. Loopy belief propagation for approximate inference: An empirical study. In Proc. 
of Uncertainty in Artificial Intelligence (UAI), pages 467–475. Judea Pearl. 1982. Reverend bayes on inference engines: A distributed hierarchical approach. In Proc. of the National Conference on Artificial Intelligence (AAAI), pages 133–136. Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, (EMNLP-09). Dragomir Radev. 2000. A common theory of information fusion from multiple text sources step one: Cross-document structure. In 1st SIGdial Workshop on Discourse and Dialogue, pages 74–83. Yusuke Shinyama and Satoshi Sekine. 2003. Paraphrase acquisition for information extraction. In Proceedings of Second International Workshop on Paraphrasing (IWP2003), pages 65–71. Benjamin Snyder and Regina Barzilay. 2007. Database-text alignment via structured multilabel classification. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI-05), pages 1713–1718. J. Weeds and W. Weir. 2005. Co-occurrence retrieval: A flexible framework for lexical distributional similarity. Computational Linguistics, 31(4):439–475. Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammar. In Proceedings of the Twenty-first Conference on Uncertainty in Artificial Intelligence, Edinburgh, UK, August. 967
2010
98
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 968–978, Uppsala, Sweden, 11-16 July 2010. c⃝2010 Association for Computational Linguistics Open-Domain Semantic Role Labeling by Modeling Word Spans Fei Huang Temple University 1805 N. Broad St. Wachman Hall 318 [email protected] Alexander Yates Temple University 1805 N. Broad St. Wachman Hall 303A [email protected] Abstract Most supervised language processing systems show a significant drop-off in performance when they are tested on text that comes from a domain significantly different from the domain of the training data. Semantic role labeling techniques are typically trained on newswire text, and in tests their performance on fiction is as much as 19% worse than their performance on newswire text. We investigate techniques for building open-domain semantic role labeling systems that approach the ideal of a train-once, use-anywhere system. We leverage recently-developed techniques for learning representations of text using latent-variable language models, and extend these techniques to ones that provide the kinds of features that are useful for semantic role labeling. In experiments, our novel system reduces error by 16% relative to the previous state of the art on out-of-domain text. 1 Introduction In recent semantic role labeling (SRL) competitions such as the shared tasks of CoNLL 2005 and CoNLL 2008, supervised SRL systems have been trained on newswire text, and then tested on both an in-domain test set (Wall Street Journal text) and an out-of-domain test set (fiction). All systems tested on these datasets to date have exhibited a significant drop-off in performance on the out-of-domain tests, often performing 15% worse or more on the fiction test sets. Yet the baseline from CoNLL 2005 suggests that the fiction texts are actually easier than the newswire texts. Such observations expose a weakness of current supervised natural language processing (NLP) technology for SRL: systems learn to identify semantic roles for the subset of language contained in the training data, but are not yet good at generalizing to language that has not been seen before. We aim to build an open-domain supervised SRL system; that is, one whose performance on out-of-domain tests approaches the same level of performance as that of state-of-the-art systems on in-domain tests. Importantly, an open-domain system must not use any new labeled data beyond what is included in the original training text when running on a new domain. This allows the system to be ported to any new domain without any manual effort. In particular, it ought to apply to arbitrary Web documents, which are drawn from a huge variety of domains. Recent theoretical and empirical evidence suggests that the fault for poor performance on out-ofdomain tests lies with the representations, or sets of features, traditionally used in supervised NLP. Building on recent efforts in domain adaptation, we develop unsupervised techniques for learning new representations of text. Using latent-variable language models, we learn representations of texts that provide novel kinds of features to our supervised learning algorithms. Similar representations have proven useful in domain-adaptation for part-of-speech tagging and phrase chunking (Huang and Yates, 2009). We demonstrate how to learn representations that are effective for SRL. 
Experiments on out-of-domain test sets show that our learned representations can dramatically improve out-of-domain performance, and narrow the gap between in-domain and out-of-domain performance by half. The next section provides background information on learning representations for NLP tasks using latent-variable language models. Section 3 presents our experimental setup for testing opendomain SRL. Sections 4, 5, 6 describe our SRL system: first, how we identify predicates in opendomain text, then how our baseline technique 968 identifies and classifies arguments, and finally how we learn representations for improving argument identification and classification on out-of-domain text. Section 7 presents previous work, and Section 8 concludes and outlines directions for future work. 2 Open-Domain Representations Using Latent-Variable Language Models Let X be an instance set for a learning problem; for SRL, this is the set of all (sentence,predicate) pairs. Let Y be the space of possible labels for an instance, and let f: X →Y be the target function to be learned. A representation is a function R: X →Z, for some suitable feature space Z (such as Rd). A domain is defined as a distribution D over the instance set X. An opendomain system observes a set of training examples (R(x), f(x)), where instances x ∈X are drawn from a source domain, to learn a hypothesis for classifying examples drawn from a separate target domain. Previous work by Ben-David et al. (2007; 2009) uses Vapnik-Chervonenkis (VC) theory to prove theoretical bounds on an open-domain learning machine’s performance. Their analysis shows that the choice of representation is crucial to opendomain learning. As is customary in VC theory, a good choice of representation must allow a learning machine to achieve low error rates during training. Just as important, however, is that the representation must simultaneously make the source and target domains look as similar to one another as possible. For open-domain SRL, then, the traditional representations are problematic. Typical representations in SRL and NLP use features of the local context to produce a representation. For instance, one dimension of a traditional representation R might be +1 if the instance contains the word “bank” as the head of a noun-phrase chunk that occurs before the predicate in the sentence, and 0 otherwise. Although many previous studies have shown that these features allow learning systems to achieve impressively low error rates during training, they also make texts from different domains look very dissimilar. For instance, a feature based on the word “bank” or “CEO” may be common in a domain of newswire text, but scarce or nonexistent in, say, biomedical literature. In our recent work (Huang and Yates, 2009) we show how to build systems that learn new representations for open-domain NLP using latentvariable language models like Hidden Markov Models (HMMs). An HMM is a generative probabilistic model that generates each word xi in the corpus conditioned on a latent variable Yi. Each Yi in the model takes on integral values from 1 to K, and each one is generated by the latent variable for the preceding word, Yi−1. The distribution for a corpus x = (x1, . . . , xN) and a set of state vectors s = (s1, . . . , sN) is given by: P(x, s) = Y i P(xi|si)P(si|si−1) Using Expectation-Maximization (Dempster et al., 1977), it is possible to estimate the distributions for P(xi|si) and P(si|si−1) from unlabeled data. 
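As a concrete illustration of how such a trained HMM is turned into token-level features, the sketch below performs the Viterbi decoding described next, assuming the initial, transition, and emission parameters have already been estimated (e.g., with EM as above) and that words have been mapped to integer indices beforehand. This is a minimal numpy sketch, not the authors' implementation.

```python
import numpy as np

def viterbi_states(obs, log_pi, log_A, log_B):
    """Most likely latent state sequence for one sentence.

    obs    : list of word indices (length T)
    log_pi : (K,)   log initial state probabilities
    log_A  : (K, K) log transitions, log_A[i, j] = log P(s_t = j | s_{t-1} = i)
    log_B  : (K, V) log emissions,   log_B[j, w] = log P(w | s = j)
    """
    T, K = len(obs), log_pi.shape[0]
    delta = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # (K, K): previous state x current state
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    states = np.zeros(T, dtype=int)
    states[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        states[t] = back[t + 1][states[t + 1]]
    return states

# Each token's decoded state index (an integer in 1..K after an optional +1 shift)
# is then added as a categorical feature for the supervised SRL learner.
```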
The Viterbi algorithm (Rabiner, 1989) can then be used to produce the optimal sequence of latent states si for a given instance x. The output of this process is an integer (ranging from 1 to K) for every word xi in the corpus. We use the integer value of si as a new feature for every xi in the sentence. In POS-tagging and chunking experiments, these learned representations have proven to meet both of Ben-David et al.’s criteria for open-domain representations: first, they are useful in making predictions on the training text because the HMM latent states categorize tokens according to distributional similarity. And second, it would be difficult to tell two domains apart based on their HMM labels, since the same HMM state can generate similar words from a variety of domains. In what follows, we adapt these representationlearning concepts to open-domain SRL. 3 Experimental Setup We test our open-domain semantic role labeling system using data from the CoNLL 2005 shared task (Carreras and M`arquez, 2005). We use the standard training set, consisting of sections 02-21 of the Wall Street Journal (WSJ) portion of the Penn Treebank, labeled with PropBank (Palmer et al., 2005) annotations for predicates and arguments. We perform our tests on the Brown corpus (Kucera and Francis, 1967) test data from CoNLL 2005, consisting of 3 sections (ck01-ck03) of propbanked Brown corpus data. This test set consists of 426 sentences containing 7,159 tokens, 804 propositions, and 2,177 arguments. While the 969 training data contains newswire text, the test sentences are drawn from the domain of “general fiction,” and contain an entirely different style (or styles) of English. The data also includes a second test set of in-domain text (section 23 of the Treebank), which we refer to as the WSJ test set and use as a reference point. Every sentence in the dataset is automatically annotated with a number of NLP pipeline systems, including part-of-speech (POS) tags, phrase chunk labels (Carreras and M`arquez, 2003), namedentity tags, and full parse information by multiple parsers. These pipeline systems are important for generating features for SRL, and one key reason for the poor performance of SRL systems on the Brown corpus is that the pipeline systems themselves perform worse. The Charniak parser, for instance, drops from an F1 of 88.25 on the WSJ test to a F1 of 80.84 on the Brown corpus. For the chunker and POS tagger, the drop-offs are less severe: 94.89 to 91.73, and 97.36 to 94.73. Toutanova et al. (2008) currently have the bestperforming SRL system on the Brown corpus test set with an F1 score of 68.81 (80.8 for the WSJ test). They use a discriminative reranking approach to jointly predict the best set of argument boundaries and the best set of argument labels for a predicate. Like the best systems from the CoNLL 2005 shared task (Punyakanok et al., 2008; Pradhan et al., 2005), they also use features from multiple parses to remain robust in the face of parser error. Owing to the established difficulty of the Brown test set and the different domains of the Brown test and WSJ training data, this dataset makes for an excellent testbed for open-domain semantic role labeling. 4 Predicate Identification In order to perform true open-domain SRL, we must first consider a task which is not formally part of the CoNLL shared task: the task of identifying predicates in a given sentence. 
While this task is almost trivial in the WSJ test set, where all but two out of over 5000 predicates can be observed in the training data, it is significantly more difficult in an open-domain setting. In the Brown test set, 6.1% of the predicates do not appear in the training data, and 11.8% of the predicates appear at most twice in the training data (c.f. 1.5% of the WSJ test predicates that appear at most twice in training). In addition, many words which appear Baseline HMM Freq P R F1 P R F1 0 89.1 80.4 84.5 93.5 84.3 88.7 0-2 87.4 84.7 86.0 91.6 88.8 90.2 all 87.8 92.5 90.1 90.8 96.3 93.5 Table 1: Using HMM features in predicate identification reduces error in out-of-domain tests by 34.3% overall, and by 27.1% for OOV predicates. “Freq” refers to frequency in the training data. There were 831 predicates in total; 51 never appeared in training and 98 appeared at most twice. as predicates in training may not be predicates in the test set. In an open-domain setting, therefore, we cannot rely solely on a catalog of predicates from the training data. To address the task of open-domain predicate identification, we construct a Conditional Random Field (CRF) (Lafferty et al., 2001) model with target labels of B-Pred, I-Pred, and O-Pred (for the beginning, interior, and outside of a predicate). We use an open source CRF software package to implement our CRF models.1 We use words, POS tags, chunk labels, and the predicate label at the preceding and following nodes as features for our Baseline system. To learn an open-domain representation, we then trained an 80 state HMM on the unlabeled texts of the training and Brown test data, and used the Viterbi optimum states of each word as categorical features. The results of our Baseline and HMM systems appear in Table 1. For predicates that never or rarely appear in training, the HMM features increase F1 by 4.2, and they increase the overall F1 of the system by 3.5 to 93.5, which approaches the F1 of 94.7 that the Baseline system achieves on the in-domain WSJ test set. Based on these results, we were satisfied that our system could find predicates in open-domain text. In all subsequent experiments, we fall back on the standard evaluation in which it is assumed that the boundaries of the predicate are given. This allows us to compare with previous work. 5 Semantic Role Labeling with HMM-based Representations Following standard practice, we divide the SRL task into two parts: argument identification and 1Available from http://sourceforge.net/projects/crf/ 970 argument classification. We treat both sub-tasks as sequence-labeling problems. During argument identification, the system must label each token with labels that indicate either the beginning or interior of an argument (B-Arg or I-Arg), or a label that indicates the token is not part of an argument (O-Arg). During argument classification, the system labels each token that is part of an argument with a class label, such as Arg0 or ArgM. Following argument classification, multi-word arguments may have different classification labels for each token. We post-process the labels by changing them to match the label of the first token. We use CRFs as our models for both tasks (Cohn and Blunsom, 2005). Most previous approaches to SRL have relied heavily on parsers, and especially constituency parsers. Indeed, when SRL systems use gold standard parses, they tend to perform extremely well (Toutanova et al., 2008). 
However, as several previous studies have noted (Gildea, 2001; Pradhan et al., 2007), using parsers can cause problems for open-domain SRL. The parsers themselves may not port well to new domains, or the features they generate for SRL may not be stable across domains, and therefore may cause sparse data problems on new domains. Our first step is therefore to build an SRL system that relies on partial parsing, as was done in CoNLL 2004 (Carreras and M`arquez, 2004). We then gradually add in lesssparse alternatives for the syntactic features that previous systems derive from parse trees. During argument identification we use the features below to predict the label Ai for token wi: • words: wi, wi−1, and wi+1 • parts of speech (POS): POS tags ti, ti−1, and ti+1 • chunk labels: (e.g., B-NP, I-VP, or O) chunk tags ci, ci−1, and ci+1 • combinations: citi, tiwi, citiwi • NE: the named entity type ni of wi • position: whether the word occurs before or after the predicate • distance: the number of intervening tokens between wi and the target predicate • POS before, after predicate: the POS tag of the tokens immediately preceding and following the predicate • Chunk before, after predicate: the chunk type of the tokens immediately preceding and following the predicate • Transition: for prediction node Ai, we use Ai−1and Ai+1 as features For argument classification, we add the features below to those listed above: • arg ID: the labels Ai produced by arg. identification (B-Arg, I-Arg, or O) • combination: predicate + first argument word, predicate+ last argument word, predicate + first argument POS, predicate + last argument POS • head distance: the number of tokens between the first token of the argument phrase and the target predicate • neighbors: the words immediately before and after the argument. We refer to the CRF model with these features as our Baseline SRL system; in what follows we extend the Baseline model with more sophisticated features. 5.1 Incorporating HMM-based Representations As a first step towards an open-domain representation, we use an HMM with 80 latent state values, trained on the unlabeled text of the training and test sets, to produce Viterbi-optimal state values si for every token in the corpus. We then add the following features to our CRFs for both argument identification and classification: • HMM states: HMM state values si, si−1, and si+1 • HMM states before, after predicate: the state value of the tokens immediately preceding and following the predicate We call the resulting model our Baseline+HMM system. 5.2 Path Features Despite all of the features above, the SRL system has very little information to help it determine the syntactic relationship between a target predicate and a potential argument. For instance, these baseline features provide only crude distance information to distinguish between multiple arguments that follow a predicate, and they make it difficult to correctly identify clause arguments or arguments that appear far from the predicate. Our system needs features that can help distinguish between different syntactic relationships, without being overly sensitive to the domain. As a step in this direction, we introduce path features: features for the sequence of tokens be971 System P R F1 Baseline 63.9 59.7 61.7 Baseline+HMM 68.5 62.7 65.5 Baseline+HMM+Paths 70.0 65.6 67.7 Toutanova et al. (2008) NR NR 68.8 Table 2: Na¨ıve path features improve our baseline, but not enough to match the state-of-the-art. Toutanova et al. 
do not report (NR) separate values for precision and recall on this dataset. Differences in both precision and recall between the baseline and the other systems are statistically significant at p < 0.01 using the two-tailed Fisher’s exact test. tween a predicate and a potential argument. In standard SRL systems, these path features usually consist of a sequence of constituent parse nodes representing the shortest path through the parse tree between a word and the predicate (Gildea and Jurafsky, 2002). We substitute paths that do not depend on parse trees. We use four types of paths: word paths, POS paths, chunk paths, and HMM state paths. Given an input sentence labeled with POS tags, and chunks, we construct path features for a token wi by concatenating words (or tags or chunk labels) between wi and the predicate. For example, in the sentence “The HIV infection rate is expected to peak in 2010,” the word path between “rate” and predicate “peak” would be “is expected to”, and the POS path would be “VBZ VBD TO.” Since word, POS, and chunk paths are all subject to data sparsity for arguments that are far from the predicate, we build less-sparse path features by using paths of HMM states. If we use a reasonable number of HMM states, each category label is much more common in the training data than the average word, and paths containing the HMM states should be much less sparse than word paths, and even chunk paths. In our experiments, we use 80-state HMMs. We call the result of adding path features to our feature set the Baseline+HMM+Paths system((BL). Table 2 shows the performance of our three baseline systems. In this open-domain SRL experiment, path features improve over the Baseline’s F1 by 6 points, and by 2.2 points over Baseline+HMM, although the improvement is not enough to match the state-of-the-art system by Toutanova et al. Y1 Y2 Y6 The is expected to peak in 2010 Y3 Y4 Y5 Y7 Y8 HIV infection rate Figure 1: The Span-HMM over the sentence. It shows the span of length 3. 6 Representations for Word Spans Despite partial success in improving our baseline SRL system with path features, these features still suffer from data sparsity — many paths in the test set are never or very rarely observed during training, so the CRF model has little or no data points from which to estimate accurate parameters for these features. In response, we introduce latent variable models of word spans, or sequences of words. As with the HMM models above, the latent states for word spans can be thought of as probabilistic categories for the spans. And like the HMM models, we can turn the word span models into representations by using the state value for a span as a feature in our supervised SRL system. Unlike path features, the features from our models of word spans consist of a single latent state value rather than a concatenation of state values, and as a consequence they tend to be much less sparse in the training data. 6.1 Span-HMM Representations We build our latent-variable models of word spans using variations of Hidden Markov Models, which we call Span-HMMs. Figure 1 shows a graphical model of a Span-HMM. Each Span-HMM behaves just like a regular HMM, except that it includes one node, called a span node, that can generate an entire span rather than a single word. 
For instance, in the Span-HMM of Figure 1, node y5 is a span node that generates a span of length 3: “is expected to.” Span-HMMs can be used to provide a single categorical value for any span of a sentence using the usual Viterbi algorithm for HMMs. That is, at test time, we generate a Span-HMM feature for word wj by constructing a Span-HMM that has a span node for the sequence of words between wj and the predicate. We determine the Viterbi optimal state of this span node, and use that state as the value of the new feature. In our example in Figure 1, the value of span node y5 is used as a feature for 972 the token “rate”, since y5 generates the sequence of words between “rate” and the predicate “peak.” Notice that by using Span-HMMs to provide these features, we have condensed all paths in our data into a small number of categorical values. Whereas there are a huge number of variations to the spans themselves, we can constrain the number of categories for the Span-HMM states to a reasonable number such that each category is likely to appear often in the training data. The value of each Span-HMM state then represents a cluster of spans with similar delimiting words; some clusters will correlate with spans between predicates and arguments, and others with spans that do not connect predicates and arguments. As a result, Span-HMM features are not sparse, and they correlate with the target function, making them useful in learning an SRL model. 6.2 Parameter Estimation We use a variant of the Baum-Welch algorithm to train our Span-HMMs on unlabeled text. In order for this to work, we need to provide Baum-Welch with a modified view of the data so that span nodes can generate multiple consecutive words in a sentence. First, we take every sentence S in our training data and generate the set Spans(S) of all valid spans in the sentence. For efficiency’s sake, we use only spans of length less than 15; approximately 95% of the arguments in our dataset were within 15 words of the predicate, so even with this restriction we are able to supply features for nearly all valid arguments. The second step of our training procedure is to create a separate data point for each span of S. For each span t ∈Spans(S), we construct a Span-HMM with a regular node generating each element of S, except that a span node generates all of t. Thus, our training data contains many different copies of each sentence S, with a different Span-HMM generating each copy. Intuitively, running Baum-Welch over this data means that a span node with state k will be likely to generate two spans t1 and t2 if t1 and t2 tend to appear in similar contexts. That is, they should appear between words that are also likely to be generated by the same latent state. Thus, certain values of k will tend to appear for spans between predicates and arguments, and others will tend to appear between predicates and non-arguments. This makes the value k informative for both argument identification and argument classification. 6.3 Memory Considerations Memory usage is a major issue for our SpanHMM models. We represent emission distributions as multinomials over discrete observations. Since there are millions of different spans in our data, a straightforward implementation would require millions of parameters for each latent state of the Span-HMM. We use two related techniques to get around this problem. In both cases, we use a second HMM model, which we call the base HMM to distinguish from our Span-HMM, to back-off from the explicit word sequence. 
We use the largest number of states for HMMs that can be fit into memory. Let S be a sentence, and let ˆs be the sequence of optimal latent state values for S produced by our base HMM. Our first approach trains the SpanHMM on Spans(ˆs), rather than Spans(S). If we use a small enough number of latent states in the base HMM (in experiments, we use 10 latent states), we drastically reduce the number of different spans in the data set, and therefore the number of parameters required for our model. We call this representation Span-HMM-Base10. As with our other HMM-based models, we use the largest number of latent states that will allow the resulting model to fit in our machine’s memory — our previous experiments on representations for partof-speech tagging suggest that more latent states are usually better. While our first technique solves the memory issue, it also loses some of the power of our original Span-HMM model by using a very coarsegrained base HMM clustering of the text into 10 categories. Our second approach trains a separate Span-HMM model for spans of different lengths. Since we need only one model in memory at a time, this allows each one to consume more memory. We therefore use base HMM models with more latent states (up to 20) to annotate our sentences, and then train on the resulting Spans(ˆs) as before. With this technique, we produce features that are combinations of the state value for span nodes and the length of the span, in order to indicate which of our Span-HMM models the state value came from. We call this representation Span-HMM-BaseByLength. 6.4 Combining Multiple Span-HMMs So far, our Span-HMM models produce one new feature for every token during argument identifi973 System P R F1 Baseline+HMM+Paths 70.0 65.6 67.7 Toutanova et al. NR NR 68.8 Span-HMM-Base10 74.5 69.3 71.8 Span-HMM-BaseByLength 76.3 70.2 73.1 Multi-Span-HMM 77.0 70.9 73.8 Table 3: Span-HMM features significantly improve over state-of-the-art results in out-ofdomain SRL. Differences in both precision and recall between the baseline and the Span-HMM systems are statistically significant at p < 0.01 using the two-tailed Fisher’s exact test. cation and classification. While these new features may be very helpful, ideally we would like our learned representations to produce multiple useful features for the CRF model, so that the CRF can combine the signals from each feature to learn a sophisticated model. Towards this goal, we train N independent versions of our SpanHMM-BaseByLength models, each with a random initialization for the Baum-Welch algorithm. Since Baum-Welch is a hill-climbing algorithm, it should find local, but not necessarily global, optima for the parameters of each Span-HMMBaseByLength model. When we decode each of the models on training and test texts, we will obtain N different sequences of latent states, one for each locally-optimized model. Thus we obtain N different, independent sources of features. We call the CRF model with these N Span-HMM features the Multi-Span-HMM model(MSH); in experiments we use N = 5. 6.5 Results and Discussion Results for the Span-HMM models on the CoNLL 2005 Brown corpus are shown in Table 3. All three versions of the Span-HMM outperform Toutanova et al.’s system on the Brown corpus, with the Multi-Span-HMM gaining 5 points in F1. The Multi-Span-HMM model improves over the Baseline+HMM+Paths model by 7 points in precision, and 5.3 points in recall. 
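Returning briefly to the construction of Section 6.4, combining the independently initialized models amounts to little more than the loop sketched below; `train_span_hmm` and `viterbi_decode` are hypothetical hooks standing in for the Baum-Welch training and decoding steps described earlier, and N = 5 follows the setting used in the experiments.

```python
def multi_span_hmm_features(corpus, train_span_hmm, viterbi_decode, n_models=5):
    """Train n_models Span-HMMs from different random initializations and decode each,
    producing one independent categorical feature per model for every instance."""
    columns = []
    for seed in range(n_models):
        model = train_span_hmm(corpus, random_seed=seed)   # separate random EM start
        columns.append(viterbi_decode(model, corpus))      # one state id per instance
    # Each instance ends up with a tuple of n_models Span-HMM features for the CRF.
    return list(zip(*columns))
```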
Among the Span-HMM models, the use of more states in the Span-HMMBaseByLength model evidently outweighed the cost of splitting the model into separate versions for different length spans. Using multiple independent copies of the Span-HMMs provides a small (0.7) gain in precision and recall. Differences among the different Span-HMM models System WSJ Brown Diff Multi-Span-HMM 79.2 73.8 5.4 Toutanova et al. (2008) 80.8 68.8 12.0 Pradhan et al. (2005) 78.6 68.4 10.2 Punyakanok et al. (2008) 79.4 67.8 11.6 Table 4: Multi-Span-HMM has a much smaller drop-off in F1 than comparable systems on outof-domain test data vs in-domain test data. were not statistically significant, except that the difference in precision between the Multi-SpanHMM and the Span-HMM-Base10 is significant at p < .1. Table 4 shows the performance drop-off for top SRL systems when applied to WSJ test data and Brown corpus test data. The Multi-Span-HMM model performs near the state-of-the-art on the WSJ test set, and its F1 on out-of-domain data drops only about half as much as comparable systems. Note that several of the techniques used by other systems, such as using features from kbest parses or jointly modeling the dependencies among arguments, are complementary to our techniques, and may boost the performance of our system further. Table 5 breaks our results down by argument type. Most of our improvement over the Baseline system comes from the core arguments A0 and A1, but also from a few adjunct types like AMTMP and AM-LOC. Figure 2 shows that when the argument is close to the predicate, both systems perform well, but as the distance from the predicate grows, our Multi-Span-HMM system is better able to identify and classify arguments than the Baseline+HMM+Paths system. Table 6 provides results for argument identification and classification separately. As Pradhan et al.previously showed (Pradhan et al., 2007), SRL systems tend to have an easier time with porting argument identification to new domains, but are less strong at argument classification on new domains. Our baseline system decreases in F-score from 81.5 to 78.9 for argument identification, but suffers a much larger 8% drop in argument classification. The Multi-Span-HMM model improves over the Baseline in both tasks and on both test sets, but the largest improvement (6%) is in argument classification on the Brown test set. To help explain the success of the Span-HMM techniques, we measured the sparsity of our path 974 Overall A0 A1 A2 A3 A4 ADV DIR DIS LOC MNR MOD NEG PNC TMP R-A0 R-A1 Num 2177 566 676 147 12 15 143 53 22 85 110 91 50 17 112 25 21 BL 67.7 76.2 70.6 64.8 59.0 71.2 52.7 54.8 71.9 67.5 58.3 90.9 90.0 50.0 76.5 76.5 71.3 MSH 73.8 82.5 73.6 63.9 60.3 73.3 50.8 52.9 70.0 70.3 52.7 94.2 92.9 51.6 81.6 84.4 75.7 Table 5: SRL results (F1) on the Brown test corpus broken down by role type. BL is the Baseline+HMM+Paths model, MSH is the Multi-Span-HMM model. Column 8 to 16 are all adjuncts (AM-). We omit roles with ten or fewer examples. 50 55 60 65 70 75 80 85 90 F1 score Words between predicate and argument MSH BL Figure 2: The Multi-Span-HMM (MSH) model is better able to identify and classify arguments that are far from the predicate than the Baseline+HMM+Paths (BL) model. Test Id.F1 Accuracy BL WSJ 81.5 93.7 Brown 78.9 85.8 MSH WSJ 83.9 94.4 Brown 80.3 91.9 Table 6: Baseline (BL) and Multi-Span-HMM (MSH) performance on argument identification (Id.F1) and argument classification. and Span-HMM features. 
Figure 3 shows the percentage of feature values in the Brown corpus that appear more than twice, exactly twice, or exactly once in the training data. While word path features can be highly valuable when there is training data available for them, only about 11% of the word paths in the Brown test set also appeared at all in the training data. POS and chunk paths fared a bit better (22% and 33% respectively), but even then nearly 70% of all feature values had no available training data. HMM and Span-HMM-Base10 paths achieved far better success in this respect. Importantly, the improvement is mostly due to features that are seen often in training, rather than features that were seen just once or twice. Thus Span0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Fraction of Feature Values in Brown Corpus Occurs 1x in WSJ Occurs 2x in WSJ Occurs 3x or more in WSJ 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Fraction of Feature Values in Brown Corpus Occurs 1x in WSJ Occurs 2x in WSJ Occurs 3x or more in WSJ Figure 3: HMM path and Span-HMM features are far more likely to appear often in training data than the word, POS, and chunk path features. Over 70% of Span-HMM-Base10 features in the Brown corpus appear at least three times during training; in contrast, fewer than 33% of chunk path features in the Brown corpus appear at all during training. HMMs derive their power as representations for open-domain SRL from the fact that they provide features that are mostly the same across domains; 80% of the features of our Span-HMM-Base10 in the Brown corpus were observed at least once in the training data. Table 7 shows examples of spans that were clustered into the same Span-HMM state, along with word to either side. All four examples are cases where the Span-HMM-Base10 model correctly tagged the following argument, but the Baseline+HMM+Paths model did not. We can see that the paths of these four examples are completely different, but the words surrounding them are very similar. The emission from a span node are very sparse, so the Span-HMM has unsurprisingly learned to cluster spans according to the HMM states that precede and follow the span node. This is by design, as this kind of distributional clustering is helpful for identifying and classifying arguments. One potentially interesting 975 Predicate Span B-Arg picked the things up from passed through the barbed wire at come down from Sundays to sat over his second rock in Table 7: Example spans labeled with the same Span-HMM state. The examples are taken from sentences where the Span-HMM-Base10 model correctly identified the argument on the right, but the Baseline+HMM+Paths model did not. question for future work is whether a less sparse model of the spans themselves, such as a Na¨ıve Bayes model for the span node, would yield a better clustering for producing features for semantic role labeling. 7 Previous Work Deschact and Moens (2009) use a latent-variable language model to provide features for an SRL system, and they show on CoNLL 2008 data that they can significantly improve performance when little labeled training data is available. They do not report on out-of-domain tests. They use HMM language models trained on unlabeled text, much like we use in our baseline systems, but they do not consider models of word spans, which we found to be most beneficial. Downey et al. (2007b) also incorporate HMM-based representations into a system for the related task of Web information extraction, and are able to show that the system improves performance on rare terms. 
F¨urstenau and Lapata (2009b; 2009a) use semisupervised techniques to automatically annotate data for previously unseen predicates with semantic role information. This task differs from ours in that it focuses on previously unseen predicates, which may or may not be part of text from a new domain. Their techniques also result in relatively lower performance (F1 between 15 and 25), although their tests are on a more difficult and very different corpus. Weston et al. (2008) use deep learning techniques based on semi-supervised embeddings to improve an SRL system, though their tests are on in-domain data. Unsupervised SRL systems (Swier and Stevenson, 2004; Grenager and Manning, 2006; Abend et al., 2009) can naturally be ported to new domains with little trouble, but their accuracy thus far falls short of state-ofthe-art supervised and semi-supervised systems. The disparity in performance between indomain and out-of-domain tests is by no means restricted to SRL. Past research in a variety of NLP tasks has shown that parsers (Gildea, 2001), chunkers (Huang and Yates, 2009), part-of-speech taggers (Blitzer et al., 2006), named-entity taggers (Downey et al., 2007a), and word sense disambiguation systems (Escudero et al., 2000) all suffer from a similar drop-off in performance on out-of-domain tests. Numerous domain adaptation techniques have been developed to address this problem, including self-training (McClosky et al., 2006) and instance weighting (Bacchiani et al., 2006) for parser adaptation and structural correspondence learning for POS tagging (Blitzer et al., 2006). Of these techniques, structural correspondence learning is closest to our technique in that it is a form of representation learning, but it does not learn features for word spans. None of these techniques have been successfully applied to SRL. 8 Conclusion and Future Work We have presented novel representation-learning techniques for building an open-domain SRL system. By incorporating learned features from HMMs and Span-HMMs trained on unlabeled text, our SRL system is able to correctly identify predicates in out-of-domain text with an F1 of 93.5, and it can identify and classify arguments to predicates with an F1 of 73.8, outperforming comparable state-of-the-art systems. Our successes so far on out-of-domain tests bring hope that supervised NLP systems may eventually achieve the ideal where they no longer need new manually-labeled training data for every new domain. There are several potential avenues for further progress towards this goal, including the development of more portable SRL pipeline systems, and especially parsers. Developing techniques that can incrementally adapt to new domains without the computational expense of retraining the CRF model every time would help make open-domain SRL more practical. Acknowledgments We wish to thank the anonymous reviewers for their helpful comments and suggestions. 976 References Omri Abend, Roi Reichart, and Ari Rappoport. 2009. Unsupervised argument identification for semantic role labeling. In Proceedings of the ACL. Michiel Bacchiani, Michael Riley, Brian Roark, and Richard Sproat. 2006. MAP adaptation of stochastic grammars. Computer Speech and Language, 20(1):41–68. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2007. Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems 20, Cambridge, MA. MIT Press. Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jenn Wortman. 2009. 
A theory of learning from different domains. Machine Learning, (to appear). John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In EMNLP. Xavier Carreras and Llu´ıs M`arquez. 2003. Phrase recognition by filtering and ranking with perceptrons. In Proceedings of RANLP-2003. Xavier Carreras and Llu´ıs M`arquez. 2004. Introduction to the CoNLL-2004 shared task: Semantic role labeling. In Proceedings of the Conference on Natural Language Learning (CoNLL). Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Proceedings of the Conference on Natural Language Learning (CoNLL). Trevor Cohn and Phil Blunsom. 2005. Semantic role labelling with tree conditional random fields. In Proceedings of CoNLL. Arthur Dempster, Nan Laird, and Donald Rubin. 1977. Likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Koen Deschacht and Marie-Francine Moens. 2009. Semi-supervised semantic role labeling using the latent words language model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). D. Downey, M. Broadhead, and O. Etzioni. 2007a. Locating complex named entities in web text. In Procs. of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007). Doug Downey, Stefan Schoenmackers, and Oren Etzioni. 2007b. Sparse information extraction: Unsupervised language models to the rescue. In ACL. G. Escudero, L. M´arquez, and G. Rigau. 2000. An empirical study of the domain dependence of supervised word sense disambiguation systems. In EMNLP/VLC. Hagen F¨urstenau and Mirella Lapata. 2009a. Graph alignment for semi-supervised semantic role labeling. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 11–20. Hagen F¨urstenau and Mirella Lapata. 2009b. Semisupervised semantic role labeling. In Proceedings of the 12th Conference of the European Chapter of the ACL, pages 220–228. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. Daniel Gildea. 2001. Corpus Variation and Parser Performance. In Conference on Empirical Methods in Natural Language Processing. Trond Grenager and Christopher D Manning. 2006. Unsupervised discovery of a statistical verb lexicon. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Fei Huang and Alexander Yates. 2009. Distributional representations for handling sparsity in supervised sequence labeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. H. Kucera and W.N. Francis. 1967. Computational Analysis of Present-Day American English. Brown University Press. J. Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pages 337–344. Martha Palmer, Dan Gildea, and Paul Kingsbury. 2005. The Proposition Bank: A corpus annotated with semantic roles. Computational Linguistics Journal, 31(1). Sameer Pradhan, Kadri Hacioglu, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005. 
Semantic role chunking combining complementary syntactic views. In Proc. of the Annual Conference on Computational Natural Language Learning (CoNLL). Sameer Pradhan, Wayne Ward, and James H. Martin. 2007. Towards robust semantic role labeling. In Proceedings of NAACL-HLT, pages 556–563. Vasin Punyakanok, Dan Roth, and Wen-tau Yih. 2008. The importance of syntactic parsing and inference in semantic role labeling. Computational Linguistics, 34(2):257–287. 977 Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257– 285. Robert S. Swier and Suzanne Stevenson. 2004. Unsupervised semantic role labelling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 95–102. Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics, 34(2):161–191. Jason Weston, Frederic Ratle, and Ronan Collobert. 2008. Deep learning via semi-supervised embedding. In Proceedings of the 25th International Conference on Machine Learning. 978
2010
99
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1–11, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Word-Class Approach to Labeling PSCFG Rules for Machine Translation Andreas Zollmann and Stephan Vogel Language Technologies Institute School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213, USA {zollmann,vogel+}@cs.cmu.edu Abstract In this work we propose methods to label probabilistic synchronous context-free grammar (PSCFG) rules using only word tags, generated by either part-of-speech analysis or unsupervised word class induction. The proposals range from simple tag-combination schemes to a phrase clustering model that can incorporate an arbitrary number of features. Our models improve translation quality over the single generic label approach of Chiang (2005) and perform on par with the syntactically motivated approach from Zollmann and Venugopal (2006) on the NIST large Chineseto-English translation task. These results persist when using automatically learned word tags, suggesting broad applicability of our technique across diverse language pairs for which syntactic resources are not available. 1 Introduction The Probabilistic Synchronous Context Free Grammar (PSCFG) formalism suggests an intuitive approach to model the long-distance and lexically sensitive reordering phenomena that often occur across language pairs considered for statistical machine translation. As in monolingual parsing, nonterminal symbols in translation rules are used to generalize beyond purely lexical operations. Labels on these nonterminal symbols are often used to enforce syntactic constraints in the generation of bilingual sentences and imply conditional independence assumptions in the translation model. Several techniques have been recently proposed to automatically identify and estimate parameters for PSCFGs (or related synchronous grammars) from parallel corpora (Galley et al., 2004; Chiang, 2005; Zollmann and Venugopal, 2006; Liu et al., 2006; Marcu et al., 2006). While all of these techniques rely on wordalignments to suggest lexical relationships, they differ in the way in which they assign labels to nonterminal symbols of PSCFG rules. Chiang (2005) describes a procedure to extract PSCFG rules from word-aligned (Brown et al., 1993) corpora, where all nonterminals share the same generic label X. In Galley et al. (2004) and Marcu et al. (2006), target language parse trees are used to identify rules and label their nonterminal symbols, while Liu et al. (2006) use source language parse trees instead. Zollmann and Venugopal (2006) directly extend the rule extraction procedure from Chiang (2005) to heuristically label any phrase pair based on target language parse trees. Label-based approaches have resulted in improvements in translation quality over the single X label approach (Zollmann et al., 2008; Mi and Huang, 2008); however, all the works cited here rely on stochastic parsers that have been trained on manually created syntactic treebanks. These treebanks are difficult and expensive to produce and exist for a limited set of languages only. In this work, we propose a labeling approach that is based merely on part-of-speech analysis of the source or target language (or even both). Towards the ultimate goal of building end-to-end machine translation systems without any human annotations, we also experiment with automatically inferred word classes using distributional clustering (Kneser and Ney, 1993). 
Since the number of classes is a parameter of the clustering method and the resulting nonterminal size of our grammar is a function of the number of word classes, the PSCFG grammar complexity can be adjusted to the specific translation task at hand. Finally, we introduce a more flexible labeling approach based on K-means clustering, which allows 1 the incorporation of an arbitrary number of wordclass based features, including phrasal contexts, can make use of multiple tagging schemes, and also allows non-class features such as phrase sizes. 2 PSCFG-based translation In this work we experiment with PSCFGs that have been automatically learned from word-aligned parallel corpora. PSCFGs are defined by a source terminal set (source vocabulary) TS, a target terminal set (target vocabulary) TT , a shared nonterminal set N and rules of the form: A →⟨γ, α, w⟩where • A ∈N is a labeled nonterminal referred to as the left-hand-side of the rule, • γ ∈(N ∪TS)∗is the source side of the rule, • α ∈(N ∪TT )∗is the target side of the rule, • w ∈[0, ∞) is a non-negative real-valued weight assigned to the rule; in our model, w is the product of features φi raised to the power of weight λi. Chiang (2005) learns a single-nonterminal PSCFG from a bilingual corpus by first identifying initial phrase pairs using the technique from Koehn et al. (2003), and then performing a generalization operation to generate phrase pairs with gaps, which can be viewed as PSCFG rules with generic ‘X’ nonterminal left-hand-sides and substitution sites. Bilingual features φi that judge the quality of each rule are estimated based on rule extraction frequency counts. 3 Hard rule labeling from word classes We now describe a simple method of inducing a multi-nonterminal PSCFG from a parallel corpus with word-tagged target side sentences. The same procedure can straightforwardly be applied to a corpus with tagged source side sentences. We use the simple term ‘tag’ to stand for any kind of word-level analysis—a syntactic, statistical, or other means of grouping word types or tokens into classes, possibly based on their position and context in the sentence, POS tagging being the most obvious example. As in Chiang’s hierarchical system, we rely on an external phrase-extraction procedure such as the one of Koehn et al. (2003) to provide us with a set of phrase pairs for each sentence pair in the training corpus, annotated with their respective start and end positions in the source and target sentences. Let f = f1 · · · fm be the current source sentence, e = e1 · · · en the current target sentence, and t = t1 · · · tn its corresponding target tag sequence. We convert each extracted phrase pair, represented by its source span ⟨i, j⟩and target span ⟨k, ℓ⟩, into an initial rule tk-tℓ→fi · · · fj | ek · · · eℓ by assigning it a nonterminal “tk-tℓ” constructed by combining the tag of the target phrase’s left-most word with the tag of its right-most word. The creation of complex rules based on all initial rules obtained from the current sentence now proceeds just as in Chiang’s model. 
Consider the target-tagged example sentence pair: Ich habe ihn gesehen | I/PRP saw/VBD him/PRP Then (depending on the extracted phrase pairs), the resulting initial rules could be: 1: PRP-PRP →Ich | I 2: PRP-PRP →ihn | him 3: VBD-VBD →gesehen | saw 4: VBD-PRP →habe ihn gesehen | saw him 5: PRP-PRP →Ich habe ihn gesehen | I saw him Now, by abstracting-out initial rule 2 from initial rule 4, we obtain the complex rule: VBD-PRP →habe PRP-PRP1 gesehen | saw PRP-PRP1 Intuitively, the labeling of initial rules with tags marking the boundary of their target sides results in complex rules whose nonterminal occurrences impose weak syntactic constraints on the rules eligible for substitution in a PSCFG derivation: The left and right boundary word tags of the inserted rule’s target side have to match the respective boundary word tags of the phrase pair that was replaced by a nonterminal when the complex rule was created from a training sentence pair. Since consecutive words within a rule stem from consecutive words in the training corpus and thus are already consistent, the boundary word tags are more informative than tags of words between the boundaries for the task of combining different rules in a derivation, and are therefore a more appropriate choice for the creation of grammar labels than tags of inside words. Accounting for phrase size A drawback of the current approach is that a single-word rule such as PRP-PRP →Ich | I 2 can have the same left-hand-side nonterminal as a long rule with identical left and right boundary tags, such as (when using target-side tags): PRP-PRP →Ich habe ihn gesehen | I saw him We therefore introduce a means of distinguishing between one-word, two-word, and multiple-word phrases as follows: Each one-word phrase with tag T simply receives the label T, instead of T-T. Twoword phrases with tag sequence T1T2 are labeled T1-T2 as before. Phrases of length greater two with tag sequence T1 · · · Tn are labeled T1..Tn to denote that tags were omitted from the phrase’s tag sequence. The resulting number of grammar nonterminals based on a tag vocabulary of size t is thus given by 2t2 + t. An alternative way of accounting for phrase size is presented by Chiang et al. (2008), who introduce structural distortion features into a hierarchical phrase-based model, aimed at modeling nonterminal reordering given source span length. Our approach instead uses distinct grammar rules and labels to discriminate phrase size, with the advantage of enabling all translation models to estimate distinct weights for distinct size classes and avoiding the need of additional models in the log-linear framework; however, the increase in the number of labels and thus grammar rules decreases the reliability of estimated models for rare events due to increased data sparseness. Extension to a bilingually tagged corpus While the availability of syntactic annotations for both source and target language is unlikely in most translation scenarios, some form of word tags, be it partof-speech tags or learned word clusters (cf. Section 3) might be available on both sides. In this case, our grammar extraction procedure can be easily extended to impose both source and target constraints on the eligible substitutions simultaneously. Let Nf be the nonterminal label that would be assigned to a given initial rule when utilizing the source-side tag sequence, and Ne the assigned label according to the target-side tag sequence. Then our bilingual tag-based model assigns ‘Nf + Ne’ to the initial rule. 
The extraction of complex rules proceeds as before. The number of nonterminals in this model, based on a source tag vocabulary of size s and a target tag vocabulary of size t, is thus given by s^2 t^2 for the regular labeling method and (2s^2 + s)(2t^2 + t) when accounting for phrase size. Consider again our example sentence pair (now also annotated with source-side part-of-speech tags):

Ich/PRP habe/AUX ihn/PRP gesehen/VBN | I/PRP saw/VBD him/PRP

Given the same phrase extraction method as before, the resulting initial rules for our bilingual model, when also accounting for phrase size, are as follows:
1: PRP+PRP → Ich | I
2: PRP+PRP → ihn | him
3: VBN+VBD → gesehen | saw
4: AUX..VBN+VBD-PRP → habe ihn gesehen | saw him
5: PRP..VBN+PRP..PRP → Ich habe ihn gesehen | I saw him
Abstracting-out rule 2 from rule 4, for instance, leads to the complex rule:

AUX..VBN+VBD-PRP → habe PRP+PRP_1 gesehen | saw PRP+PRP_1

Unsupervised word class assignment by clustering

As an alternative to POS tags, we experiment with unsupervised word clustering methods based on the exchange algorithm (Kneser and Ney, 1993). Its objective function is maximizing the likelihood

\prod_{i=1}^{n} P(w_i | w_1, . . . , w_{i-1})

of the training data w = w_1, . . . , w_n given a partially class-based bigram model of the form

P(w_i | w_1, . . . , w_{i-1}) ≈ p(c(w_i) | w_{i-1}) · p(w_i | c(w_i)),

where c : V → {1, . . . , N} maps a word (type, not token) w to its class c(w), V is the vocabulary, and N the fixed number of classes, which has to be chosen a priori. We use the publicly available implementation MKCLS (Och, 1999) to train this model. As training data we use the respective side of the parallel training data for the translation system. We also experiment with the extension of this model by Clark (2003), who incorporated morphological information by imposing a Bayesian prior on the class mapping c, based on N individual distributions over strings, one for each word class. Each such distribution is a character-based hidden Markov model, thus encouraging the grouping of morphologically similar words into the same class.

4 Clustering phrase pairs directly using the K-means algorithm

Even though we have only made use of the first and last words' classes in the labeling methods described so far, the number of resulting grammar nonterminals quickly explodes. Using a scheme based on source and target phrases with accounting for phrase size, with 36 word classes (the size of the Penn English POS tag set) for both languages, yields a grammar with (36 + 2 · 36^2)^2 ≈ 6.9M nonterminal labels. Quite plausibly, phrase labeling should be informed by more than just the classes of the first and last words of the phrase. Taking phrase context into account, for example, can aid the learning of syntactic properties: a phrase beginning with a determiner and ending with a noun, with a verb as right context, is more likely to be a noun phrase than the same phrase with another noun as right context. In the current scheme, there is no way of distinguishing between these two cases. Similarly, it is conceivable that using non-boundary words inside the phrase might aid the labeling process. When relying on unsupervised learning of the word classes, we are forced to choose a fixed number of classes. A smaller number of word clusters will result in a smaller number of grammar nonterminals, and thus more reliable feature estimation, while a larger number has the potential to discover more subtle syntactic properties.
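To make the exchange-algorithm objective concrete, the following is a small, hedged Python sketch that evaluates the partially class-based bigram likelihood for a given class mapping. The unsmoothed maximum-likelihood estimates and the toy data are assumptions for illustration; MKCLS itself optimizes this criterion with far more efficient incremental count updates.

```python
# Sketch: log-likelihood of a token sequence under the partially class-based
# bigram model p(c(w_i) | w_{i-1}) * p(w_i | c(w_i)), with plain MLE counts.
import math
from collections import Counter

def class_bigram_log_likelihood(words, cls, eps=1e-12):
    word_count = Counter(words)                               # count(w)
    class_count = Counter(cls[w] for w in words)              # count(c)
    trans = Counter((words[i - 1], cls[words[i]])              # count(w_prev, c)
                    for i in range(1, len(words)))
    prev_count = Counter(words[:-1])                           # count(w as predecessor)
    ll = 0.0
    for i in range(1, len(words)):
        prev, w = words[i - 1], words[i]
        p_class = trans[(prev, cls[w])] / prev_count[prev]
        p_word = word_count[w] / class_count[cls[w]]
        ll += math.log(max(p_class * p_word, eps))
    return ll

# Toy example with two word classes (class ids chosen arbitrarily):
tokens = "the cat saw the dog".split()
classes = {"the": 0, "cat": 1, "dog": 1, "saw": 1}
print(class_bigram_log_likelihood(tokens, classes))
```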
Using multiple word clusterings simultaneously, each based on a different number of classes, could turn this global, hard trade-off into a local, soft one, informed by the number of phrase pair instances available for a given granularity. Lastly, our method of accounting for phrase size is somewhat displeasing: while there is a hard partitioning of one-word and two-word phrases, no distinction is made between phrases of length greater than two. Marking phrase sizes greater than two explicitly by length, however, would create many sparse, low-frequency rules, and one of the strengths of PSCFG-based translation is the ability to substitute flexible-length spans into nonterminals of a derivation. A partitioning where phrase size is instead merely a feature informing the labeling process seems more desirable. We thus propose to represent each phrase pair instance (including its bilingual one-word contexts) as feature vectors, i.e., points of a vector space. We then use these data points to partition the space into clusters, and subsequently assign each phrase pair instance the cluster of its corresponding feature vector as label.

The feature mapping

Consider the phrase pair instance (f_0) f_1 · · · f_m (f_{m+1}) | (e_0) e_1 · · · e_n (e_{n+1}) (where f_0, f_{m+1}, e_0, e_{n+1} are the left and right, source and target side contexts, respectively). We begin with the case of only a single, target-side word class scheme (either a tagger or an unsupervised word clustering/POS induction method). Let C = {c_1, . . . , c_N} be its set of word classes. Further, let c_0 be a short-hand for the result of looking up the class of a word that is out of bounds (e.g., the left context of the first word of a sentence, or the second word of a one-word phrase). We now map our phrase pair instance to the real-valued vector (where 1[P] is the indicator function defined as 1 if property P is true, and 0 otherwise):

⟨ 1[e_1=c_0], . . . , 1[e_1=c_N], 1[e_n=c_0], . . . , 1[e_n=c_N],
  α_sec 1[e_2=c_0], . . . , α_sec 1[e_2=c_N],
  α_sec 1[e_{n−1}=c_0], . . . , α_sec 1[e_{n−1}=c_N],
  α_ins (Σ_{i=1}^{n} 1[e_i=c_0])/n, . . . , α_ins (Σ_{i=1}^{n} 1[e_i=c_N])/n,
  α_cntxt 1[e_0=c_0], . . . , α_cntxt 1[e_0=c_N],
  α_cntxt 1[e_{n+1}=c_0], . . . , α_cntxt 1[e_{n+1}=c_N],
  α_phrsize √(N+1) log_10(n) ⟩

The α parameters determine the influence of the different types of information. The elements in the first line represent the phrase boundary word classes, the next two lines the classes of the second and penultimate word, followed by a line representing the accumulated contents of the whole phrase, followed by two lines pertaining to the context word classes. The final element of the vector is proportional to the logarithm of the phrase length.1 We chose the logarithm assuming that length deviation of syntactic phrasal units is not constant, but proportional to the average length. Thus, all other features being equal, the distance between a two-word and a four-word phrase is the same as the distance between a four-word and an eight-word phrase. We will mainly use the Euclidean (L2) distance to compare points for clustering purposes. Our feature space is thus the Euclidean vector space R^{7N+8}. To additionally make use of source-side word classes, we append elements analogous to the ones above to the vector, all further multiplied by a parameter α_src that allows trading off the relevance of source-side and target-side information.

1 The √(N + 1) factor serves to make the feature's influence independent of the number of word classes by yielding the same distance (under L2) as N + 1 identical copies of the feature.
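A minimal sketch of how such a feature vector could be assembled is given below. It covers only the target-side part described above; the function name and the class-id encoding (0 standing for the out-of-bounds class c_0) are illustrative assumptions rather than the authors' implementation, and the default α values follow the settings reported later in the experiments.

```python
# Sketch: target-side phrase-pair feature vector of dimension 7N + 8.
import math
import numpy as np

def one_hot(class_id, N):
    v = np.zeros(N + 1)
    v[class_id] = 1.0
    return v

def phrase_pair_features(e_classes, left_ctx, right_ctx, N,
                         a_sec=0.25, a_ins=0.0, a_cntxt=0.25, a_phrsize=0.5):
    n = len(e_classes)
    second = e_classes[1] if n > 1 else 0        # c_0 if out of bounds
    penult = e_classes[-2] if n > 1 else 0
    inside = np.zeros(N + 1)                     # fraction of words per class
    for c in e_classes:
        inside[c] += 1.0 / n
    return np.concatenate([
        one_hot(e_classes[0], N),                # first word
        one_hot(e_classes[-1], N),               # last word
        a_sec * one_hot(second, N),              # second word
        a_sec * one_hot(penult, N),              # penultimate word
        a_ins * inside,                          # accumulated phrase content
        a_cntxt * one_hot(left_ctx, N),          # left context word
        a_cntxt * one_hot(right_ctx, N),         # right context word
        [a_phrsize * math.sqrt(N + 1) * math.log10(n)],
    ])

vec = phrase_pair_features([2, 1, 2], left_ctx=3, right_ctx=0, N=7)
print(vec.shape)   # (57,) = 7*7 + 8
```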
In the same fashion, we can incorporate multiple tagging schemes (e.g., word clusterings of different granularities) into the same feature vector. As finergrained schemes have more elements in the feature vector than coarser-grained ones, and thus exert more influence, we set the α parameter for each scheme to 1/N (where N is the number of word classes of the scheme). The K-means algorithm To create the clusters, we chose the K-means algorithm (Steinhaus, 1956; MacQueen, 1967) for both its computational efficiency and ease of implementation and parallelization. Given an initial mapping from the data points to K clusters, the procedure alternates between (i) computing the centroid of each cluster and (ii) reallocating each data point to the closest cluster centroid, until convergence. We implemented two commonly used initialization methods: Forgy and Random Partition. The Forgy method randomly chooses K observations from the data set and uses these as the initial means. The Random Partition method first randomly assigns a cluster to each observation and then proceeds straight to step (ii). Forgy tends to spread the initial means out, while Random Partition places all of them close to the center of the data set. As the resulting clusters looked similar, and Random Partition sometimes led to a high rate of empty clusters, we settled for Forgy. 5 Experiments We evaluate our approach by comparing translation quality, as evaluated by the IBM-BLEU (Papineni et al., 2002) metric on the NIST Chinese-to-English translation task using MT04 as development set to train the model parameters λ, and MT05, MT06 and MT08 as test sets. Even though a key advantage of our method is its applicability to resource-poor languages, we used a language pair for which linguistic resources are available in order to determine how close translation performance can get to a fully syntax-based system. Accordingly, we use Chiang’s hierarchical phrase based translation model (Chiang, 2007) as a base line, and the syntax-augmented MT model (Zollmann and Venugopal, 2006) as a ‘target line’, a model that would not be applicable for language pairs without linguistic resources. We perform PSCFG rule extraction and decoding using the open-source “SAMT” system (Venugopal and Zollmann, 2009), using the provided implementations for the hierarchical and syntax-augmented grammars. Apart from the language model, the lexical, phrasal, and (for the syntax grammar) labelconditioned features, and the rule, target word, and glue operation counters, Venugopal and Zollmann (2009) also provide both the hierarchical and syntax-augmented grammars with a rareness penalty 1/ cnt(r), where cnt(r) is the occurrence count of rule r in the training corpus, allowing the system to learn penalization of low-frequency rules, as well as three indicator features firing if the rule has one, two unswapped, and two swapped nonterminal pairs, respectively.2 Further, to mitigate badly estimated PSCFG derivations based on low-frequency rules of the much sparser syntax model, the syntax grammar also contains the hierarchical grammar as a backbone (cf. Zollmann and Vogel (2010) for details and empirical analysis). We implemented our rule labeling approach within the SAMT rule extraction pipeline, resulting in comparable features across all systems. 
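The clustering step itself is standard; the following hedged Python sketch shows K-means with Forgy initialization on already-extracted feature vectors. It is a generic single-machine illustration, not the authors' parallelized implementation, and the data in the usage example are synthetic.

```python
# Sketch: K-means with Forgy initialization, alternating (i) centroid updates
# and (ii) reassignment to the closest centroid under the L2 distance.
import numpy as np

def kmeans_forgy(points, K, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Forgy: pick K observations from the data as the initial means.
    centroids = points[rng.choice(len(points), size=K, replace=False)]
    assign = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        new_assign = dists.argmin(axis=1)
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
        for k in range(K):
            members = points[assign == k]
            if len(members):                     # keep old centroid if empty
                centroids[k] = members.mean(axis=0)
    return assign, centroids

# Each phrase pair instance would first be mapped to its feature vector and
# then labeled with the index of the cluster it falls into:
X = np.random.default_rng(1).normal(size=(1000, 57))   # e.g. 7N+8 with N=7
labels, _ = kmeans_forgy(X, K=50)
print(labels[:10])
```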
For all systems, we use the bottom-up chart parsing decoder implemented in the SAMT toolkit with a reordering limit of 15 source words, and correspondingly extract rules from initial phrase pairs of maximum source length 15. All rules have at most two nonterminal symbols, which must be non-consecutive on the source side, and rules must contain at least one source-side terminal symbol. The beam settings for the hierarchical system are 600 items per 'X' (generic rule) cell, and 600 per 'S' (glue) cell.3 Due to memory limitations, the multi-nonterminal grammars have to be pruned more harshly: we allow 100 'S' items, and a total of 500 non-'S' items, but maximally 40 items per nonterminal. For all systems, we further discard non-initial rules occurring only once.4 For the multi-nonterminal systems, we generally further discard all non-generic non-initial rules occurring fewer than 6 times, but we additionally give results for a 'slow' version of the Syntax target-line system and our best word-class based systems, where only single occurrences were removed. For parameter tuning, we use the L0-regularized minimum-error-rate training tool provided by the SAMT toolkit. Each system is trained separately to adapt the parameters to its specific properties (size of nonterminal set, grammar complexity, feature sparseness, reliance on the language model, etc.). The parallel training data comprises 9.6M sentence pairs (206M Chinese and 228M English words). The source and target language parses for the syntax-augmented grammar, as well as the POS tags for our POS-based grammars, were generated by the Stanford parser (Klein and Manning, 2003). The results are given in Table 1. Results for the Syntax system are consistent with previous results (Zollmann et al., 2008), indicating improvements over the hierarchical system. Our approach, using target POS tags ('POS-tgt (no phr. s.)'), outperforms the hierarchical system on all three test sets, and gains further improvements when accounting for phrase size ('POS-tgt'). The latter approach is roughly on par with the corresponding Syntax system, slightly outperforming it on average, but not consistently across all test sets. The same is true for the 'slow' version ('POS-tgt-slow'). The model based on bilingually tagged training instances ('POS-src&tgt') does not gain further improvements over the merely target-based one, but actually performs worse. We assume this is due to the huge number of nonterminals of 'POS-src&tgt' ((2·33^2 + 33)(2·36^2 + 36) ≈ 5.8M in principle) compared to 'POS-tgt' (2·36^2 + 36 = 2628), increasing the sparseness of the grammar and thus leading to less reliable statistical estimates. We also experimented with a source-tag based model ('POS-src'). In line with previous findings for syntax-augmented grammars (Zollmann and Vogel, 2010), the source-side-based grammar does not reach the translation quality of its target-based counterpart; however, the model still outperforms the hierarchical system on all test sets. Further, decoding is much faster than for 'POS-ext-tgt' and even slightly faster than 'Hierarchical'.

2 Penalization or reward of purely-lexical rules can be indirectly learned by trading off these features with the rule counter feature.
3 For comparison, Chiang (2007) uses 30 and 15, respectively, and further prunes items that deviate too much in score from the best item. He extracts initial phrases of maximum length 10.
4 As shown in Zollmann et al. (2008), the impact of these rules on translation quality is negligible.
This is due to the fact that for the source-tag based approach, a given chart cell in the CYK decoder, represented by a start and end position in the source sentence, almost uniquely determines the nonterminal any hypothesis in this cell can have: disregarding part-of-speech tag ambiguity and phrase size accounting, that nonterminal will be the composition of the tags of the start and end source words spanned by that cell. At the same time, this demonstrates that there is hence less of a role for the nonterminal labels to resolve translational ambiguity in the source based model than in the target based model.

Performance of the word-clustering based models

To empirically validate the unsupervised clustering approaches, we first need to decide how to determine the number of word classes, N. A straightforward approach is to run experiments and report test set results for many different N. While this would allow us to reliably conclude the optimal number N, a comparison of that best-performing clustering method to the hierarchical, syntax, and POS systems would be tainted by the fact that N was effectively tuned on the test sets. We therefore choose N merely based on development set performance. Unfortunately, variance in development set BLEU scores tends to be higher than in test set scores, despite SAMT MERT's inbuilt algorithms to overcome local optima, such as random restarts and zeroing-out. We have noticed that using an L0-penalized BLEU score5 as MERT's objective on the merged n-best lists over all iterations is more stable and will therefore use this score to determine N. Figure 1 (left) shows the performance of the distributional clustering model ('Clust') and its morphology-sensitive extension ('Clust-morph') according to this score for varying values of N = 1, . . . , 36 (the number of Penn treebank POS tags, used for the 'POS' models, is 36).6 For 'Clust', we see a comfortably wide plateau of nearly-identical scores from N = 7, . . . , 15. Scores for 'Clust-morph' are lower throughout, and peak at N = 7.

5 Given by: BLEU − β × |{i ∈ {1, . . . , K} | λ_i ≠ 0}|, where λ_1, . . . , λ_K are the feature weights and the constant β (which we set to 0.00001) is the regularization penalty.
6 All these models account for phrase size.

System                           Dev (MT04)  MT05   MT06   MT08   TestAvg  Time
Hierarchical                     38.63       36.51  33.26  25.77  31.85    14.3
Syntax                           39.39       37.09  34.01  26.53  32.54    18.1
Syntax-slow                      39.69       37.56  34.66  26.93  33.05    34.6
POS-tgt (no phr. s.)             39.31       37.29  33.79  26.13  32.40    27.7
POS-tgt                          39.14       37.29  33.97  26.77  32.68    19.2
POS-src                          38.74       36.75  33.85  26.76  32.45    12.2
POS-src&tgt                      38.78       36.71  33.65  26.52  32.29    18.8
POS-tgt-slow                     39.86       37.78  34.37  27.14  33.10    44.6
Clust-7-tgt                      39.24       36.74  34.00  26.93  32.56    24.3
Clust-7-morph-tgt                39.08       36.57  33.81  26.40  32.26    23.6
Clust-7-src                      38.68       36.17  33.23  26.55  31.98    11.1
Clust-7-src&tgt                  38.71       36.49  33.65  26.33  32.16    15.8
Clust-7-tgt-slow                 39.48       37.70  34.31  27.24  33.08    45.2
kmeans-POS-src&tgt               39.11       37.23  33.92  26.80  32.65    18.5
kmeans-POS-src&tgt-L1            39.33       36.92  33.81  26.59  32.44    17.6
kmeans-POS-src&tgt-cosine        39.15       37.07  33.98  26.68  32.58    17.7
kmeans-POS-src&tgt (αins = .5)   39.07       36.88  33.71  26.26  32.28    16.5
kmeans-Clust-7-src&tgt           39.19       36.96  34.26  26.97  32.73    19.3
kmeans-Clust-7..36-src&tgt       39.09       36.93  34.24  26.92  32.70    17.3
kmeans-POS-src&tgt-slow          39.28       37.16  34.38  27.11  32.88    36.3
kmeans-Clust-7..36-s&t-slow      39.18       37.12  34.13  27.35  32.87    34.3

Table 1: Translation quality in % case-insensitive IBM-BLEU (i.e., brevity penalty based on closest reference length) for Chinese-English NIST-large translation tasks, comparing baseline Hierarchical and Syntax systems with POS and clustering based approaches proposed in this work. 'TestAvg' shows the average score over the three test sets. 'Time' is the average decoding time per sentence in seconds on one CPU.

Looking back at Table 1, we now compare the clustering models chosen by the procedure above (resulting in N = 7 for the morphology-unaware model, 'Clust-7-tgt', as well as the morphology-aware model, 'Clust-7-morph-tgt') to the other systems. 'Clust-7-tgt' improves over the hierarchical baseline on all three test sets and is on par with the corresponding Syntax and POS target lines. The same holds for the 'Clust-7-tgt-slow' version. We also experimented with a model variant based on seven source and seven target language clusters ('Clust-7-src&tgt') and a source-only labeled model ('Clust-7-src'), both performing worse. Surprisingly, the morphology-sensitive clustering model ('Clust-7-morph-tgt'), while still improving over the hierarchical system, performs worse than the morphology-unaware model. An inspection of the trained word clusters showed that the model, while far superior to the morphology-unaware model in e.g. mapping all numbers to the same class, is overzealous in discovering morphological regularities (such as the '-ed' suffix) to partition functionally only slightly dissimilar words (such as present-tense and past-tense verbs) into different classes. While these subtle distinctions make for good partitionings when the number of clusters is large, they appear to lead to inferior results for our task, which relies on coarse-grained partitionings of the vocabulary. Note that there are no 'src' or 'src&tgt' systems for 'Clust-morph', as Chinese, being a monosyllabic writing system, does not lend itself to morphology-sensitive clustering.

K-means clustering based models

To establish suitable values for the α parameters and investigate the impact of the number of clusters, we looked at the development performance over various parameter combinations for a K-means model based on source and/or target part-of-speech tags.7 As can be seen from Figure 1 (right), our method reaches its peak performance at around 50 clusters and then levels off slightly. Encouragingly, in contrast to the hard labeling procedure, K-means actually improves when adding source-side information. The optimal ratio of weighting source and target classes is 0.5:1, corresponding to αsrc = .5. Incorporating context information also helps, and does best for αcntxt = 0.25, i.e.
when giving contexts 1/4 the influence of the phrase boundary words.

7 We set αsec = .25, αins = 0, and αphrsize = .5 throughout.

Figure 1: Left: Performance of the distributional clustering model 'Clust' and its morphology-sensitive extension 'Clust-morph' according to L0-penalized development set BLEU score for varying numbers N of word classes. For each data point N, the corresponding number of nonterminals of the induced grammar is stated in parentheses. Right: Development set performance of K-means for various numbers of labels and values of αsrc and αcntxt.

Entry 'kmeans-POS-src&tgt' in Table 1 shows the test set results for the development-set best K-means configuration (i.e., αsrc = .5, αcntxt = 0.25, and using 500 clusters). While beating the hierarchical baseline, it is only minimally better than the much simpler target-based hard labeling method 'POS-tgt'. We also tried K-means variants in which the Euclidean distance metric is replaced by the city block distance L1 and the cosine dissimilarity, respectively, with slightly worse outcomes. Configuration 'kmeans-POS-src&tgt (αins = .5)' investigates the incorporation of non-boundary word tags inside the phrase. Unfortunately, these features appear to deteriorate performance, presumably because, given a fixed number of clusters, accounting for contents inside the phrase comes at the cost of neglect of boundary words, which are more relevant to producing correctly reordered translations. The two completely unsupervised systems 'kmeans-Clust-7-src&tgt' (based on 7-class MKCLS distributional word clustering) and 'kmeans-Clust-7..36-src&tgt' (using six different word clustering models simultaneously: all the MKCLS models from Figure 1 (left) except for the two-, three- and five-class models) have the best results, outperforming the other K-means models as well as 'Syntax' and 'POS-tgt' on average, but not on all test sets. Lastly, we give results for 'slow' K-means configurations ('kmeans-POS-src&tgt-slow' and 'kmeans-Clust-7..36-s&t-slow'). Unfortunately (or fortunately, from a pragmatic viewpoint), the models are outperformed by the much simpler 'POS-tgt-slow' and 'Clust-7-tgt-slow' models.

6 Related work

Hassan et al. (2007) improve the statistical phrase-based MT model by injecting supertags, lexical information such as the POS tag of the word and its subcategorization information, into the phrase table, resulting in generalized phrases with placeholders in them. The supertags are also injected into the language model. Our approach also generates phrase labels and placeholders based on word tags (albeit in a different manner and without the use of subcategorization information), but produces PSCFG rules for use in a parsing-based decoding system. Unsupervised synchronous grammar induction, apart from the contribution of Chiang (2005) discussed earlier, has been proposed by Wu (1997) for inversion transduction grammars, but, like Chiang's model, it only uses a single generic nonterminal label. Blunsom et al. (2009) present a nonparametric PSCFG translation model that directly induces a grammar from parallel sentences without the use of or constraints from a word-alignment model, and Cohn and Blunsom (2009) achieve the same for tree-to-string grammars, with encouraging results on small data.
Our more humble approach treats the training sentences’ word alignments and phrase pairs, obtained from external modules, as ground truth and employs a straight-forward generalization of Chiang’s popular rule extraction approach to labeled phrase pairs, resulting in a PSCFG with multiple nonterminal labels. Our phrase pair clustering approach is similar in spirit to the work of Lin and Wu (2009), who use Kmeans to cluster (monolingual) phrases and use the resulting clusters as features in discriminative classifiers for a named-entity-recognition and a query classification task. Phrases are represented in terms of their contexts, which can be more than one word long; words within the phrase are not considered. Further, each context contributes one dimension per vocabulary word (not per word class as in our approach) to the feature space, allowing for the discovery of subtle semantic similarities in the phrases, but at much greater computational expense. Another distinction is that Lin and Wu (2009) work with phrase types instead of phrase instances, obtaining a phrase type’s contexts by averaging the contexts of all its phrase instances. Nagata et al. (2006) present a reordering model for machine translation, and make use of clustered phrase pairs to cope with data sparseness in the model. They achieve the clustering by reducing phrases to their head words and then applying the MKCLS tool to these pseudo-words. Kuhn et al. (2010) cluster the phrase pairs of an SMT phrase table based on their co-occurrence counts and edit distances in order to arrive at semantically similar phrases for the purpose of phrase table smoothing. The clustering proceeds in a bottom-up fashion, gradually merging similar phrases while alternating back and forth between the two languages. 7 Conclusion and discussion In this work we proposed methods of labeling phrase pairs to create automatically learned PSCFG rules for machine translation. Crucially, our methods only rely on “shallow” lexical tags, either generated by POS taggers or by automatic clustering of words into classes. Evaluated on a Chinese-to-English translation task, our approach improves translation quality over a popular PSCFG baseline—the hierarchical model of Chiang (2005) —and performs on par with the model of Zollmann and Venugopal (2006), using heuristically generated labels from parse trees. Using automatically obtained word clusters instead of POS tags yields essentially the same results, thus making our methods applicable to all languages pairs with parallel corpora, whether syntactic resources are available for them or not. We also propose a more flexible way of obtaining the phrase labels from word classes using K-means clustering. While currently the simple hard-labeling methods perform just as well, we hope that the ease of incorporating new features into the K-means labeling method will spur interesting future research. When considering the constraints and independence relationships implied by each labeling approach, we can distinguish between approaches that label rules differently within the context of the sentence that they were extracted from, and those that do not. The Syntax system from Zollmann and Venugopal (2006) is at one end of this extreme. A given target span might be labeled differently depending on the syntactic analysis of the sentence that it is a part of. 
On the other extreme, the clustering based approach labels phrases based on the contained words alone.8 The POS grammar represents an intermediate point on this spectrum, since POS tags can change based on surrounding words in the sentence; and the position of the K-means model depends on the influence of the phrase contexts on the clustering process. Context insensitive labeling has the advantage that there are less alternative lefthand-side labels for initial rules, producing grammars with less rules, whose weights can be more accurately estimated. This could explain the strong performance of the word-clustering based labeling approach. All source code underlying this work is available under the GNU Lesser General Public License as part of the Hadoop-based ‘SAMT’ system at: www.cs.cmu.edu/˜zollmann/samt Acknowledgments We thank Jakob Uszkoreit and Ashish Venugopal for helpful comments and suggestions and Yahoo! for the access to the M45 supercomputing cluster. 8Note, however, that the creation of clusters itself did take the context of the clustered words into account. 9 References Phil Blunsom, Trevor Cohn, Chris Dyer, and Miles Osborne. 2009. A Gibbs sampler for phrasal synchronous grammar induction. In Proceedings of ACL, Singapore, August. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2). David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, Honolulu, Hawaii, October. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). David Chiang. 2007. Hierarchical phrase based translation. Computational Linguistics, 33(2). Alexander Clark. 2003. Combining distributional and morphological information for part of speech induction. In Proceedings of the European chapter of the Association for Computational Linguistics (EACL), pages 59–66. Trevor Cohn and Phil Blunsom. 2009. A Bayesian model of syntax-directed tree to string grammar induction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP), Singapore. Michael Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics Conference (HLT/NAACL). Hany Hassan, Khalil Sima’an, and Andy Way. 2007. Supertagged phrase-based statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, Prague, Czech Republic, June. Dan Klein and Christoper Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Reinhard Kneser and Hermann Ney. 1993. Improved clustering techniques for class-based statistical language modelling. In Proceedings of the 3rd European Conference on Speech Communication and Technology, pages 973–976, Berlin, Germany. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. 
In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics Conference (HLT/NAACL). Roland Kuhn, Boxing Chen, George Foster, and Evan Stratford. 2010. Phrase clustering for smoothing TM probabilities - or, how to extract paraphrases from phrase tables. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 608–616, Beijing, China, August. Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics (ACL). Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-tostring alignment template for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics. J. B. MacQueen. 1967. Some methods for classification and analysis of multivariate observations. In L. M. Le Cam and J. Neyman, editors, Proc. of the fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. University of California Press. Daniel Marcu, Wei Wang, Abdessamad Echihabi, and Kevin Knight. 2006. SPMT: Statistical machine translation with syntactified target language phrases. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Sydney, Australia. Haitao Mi and Liang Huang. 2008. Forest-based translation rule extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Masaaki Nagata, Kuniko Saito, Kazuhide Yamamoto, and Kazuteru Ohashi. 2006. A clustered global phrase reordering model for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, ACL-44, pages 713–720. Franz Josef Och. 1999. An efficient method for determining bilingual word classes. In Proceedings of the European chapter of the Association for Computational Linguistics (EACL), pages 71–76. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Hugo Steinhaus. 1956. Sur la division des corps mat´eriels en parties. Bull. Acad. Polon. Sci. Cl. III. 4, pages 801–804. 10 Ashish Venugopal and Andreas Zollmann. 2009. Grammar based statistical MT on Hadoop: An end-to-end toolkit for large scale PSCFG based MT. The Prague Bulletin of Mathematical Linguistics, 91:67–78. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3). Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings of the Workshop on Statistical Machine Translation, HLT/NAACL. Andreas Zollmann and Stephan Vogel. 2010. New parameterizations and features for PSCFG-based machine translation. In Proceedings of the 4th Workshop on Syntax and Structure in Statistical Translation (SSST), Beijing, China. Andreas Zollmann, Ashish Venugopal, Franz J. Och, and Jay Ponte. 2008. A systematic comparison of phrasebased, hierarchical and syntax-augmented statistical MT. In Proceedings of the Conference on Computational Linguistics (COLING). 11
2011
1
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 93–101, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Semi-Supervised SimHash for Efficient Document Similarity Search Qixia Jiang and Maosong Sun State Key Laboratory on Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Sci. and Tech., Tsinghua University, Beijing 100084, China [email protected], [email protected] Abstract Searching documents that are similar to a query document is an important component in modern information retrieval. Some existing hashing methods can be used for efficient document similarity search. However, unsupervised hashing methods cannot incorporate prior knowledge for better hashing. Although some supervised hashing methods can derive effective hash functions from prior knowledge, they are either computationally expensive or poorly discriminative. This paper proposes a novel (semi-)supervised hashing method named Semi-Supervised SimHash (S3H) for high-dimensional data similarity search. The basic idea of S3H is to learn the optimal feature weights from prior knowledge to relocate the data such that similar data have similar hash codes. We evaluate our method with several state-of-the-art methods on two large datasets. All the results show that our method gets the best performance. 1 Introduction Document Similarity Search (DSS) is to find similar documents to a query doc in a text corpus or on the web. It is an important component in modern information retrieval since DSS can improve the traditional search engines and user experience (Wan et al., 2008; Dean et al., 1999). Traditional search engines accept several terms submitted by a user as a query and return a set of docs that are relevant to the query. However, for those users who are not search experts, it is always difficult to accurately specify some query terms to express their search purposes. Unlike short-query based search, DSS queries by a full (long) document, which allows users to directly submit a page or a document to the search engines as the description of their information needs. Meanwhile, the explosion of information has brought great challenges to traditional methods. For example, Inverted List (IL) which is a primary key-term access method would return a very large set of docs for a query document, which leads to the time-consuming post-processing. Therefore, a new effective algorithm is required. Hashing methods can perform highly efficient but approximate similarity search, and have gained great success in many applications such as Content-Based Image Retrieval (CBIR) (Ke et al., 2004; Kulis et al., 2009b), near-duplicate data detection (Ke et al., 2004; Manku et al., 2007; Costa et al., 2010), etc. Hashing methods project high-dimensional objects to compact binary codes called fingerprints and make similar fingerprints for similar objects. The similarity search in the Hamming space1 is much more efficient than in the original attribute space (Manku et al., 2007). Recently, several hashing methods have been proposed. Specifically, SimHash (SH) (Charikar M.S., 2002) uses random projections to hash data. Although it works well with long fingerprints, SH has poor discrimination power for short fingerprints. A kernelized variant of SH, called Kernelized Locality Sensitive Hashing (KLSH) (Kulis et al., 2009a), is proposed to handle non-linearly separable data. 
These methods are unsupervised and thus cannot incorporate prior knowledge for better hashing. Motivated by this, some supervised methods are proposed to derive effective hash functions from prior knowledge, i.e., Spectral Hashing (Weiss et al., 2009) and Semi-Supervised Hashing (SSH) (Wang et al., 2010a). Regardless of different objectives, both methods derive hash functions via Principal Component Analysis (PCA) (Jolliffe, 1986). However, PCA is computationally expensive, which limits their usage for high-dimensional data. This paper proposes a novel (semi-)supervised hashing method, Semi-Supervised SimHash (S3H), for high-dimensional data similarity search. Unlike SSH, which tries to find a sequence of hash functions, S3H fixes the random projection directions and seeks the optimal feature weights from prior knowledge to relocate the objects such that similar objects have similar fingerprints. This is implemented by maximizing the empirical accuracy on the prior knowledge (labeled data) and the entropy of hash functions (estimated over labeled and unlabeled data). The proposed method avoids using PCA, which is computationally expensive especially for high-dimensional data, and leads to an efficient Quasi-Newton based solution. To evaluate our method, we compare with several state-of-the-art hashing methods on two large datasets, i.e., 20 Newsgroups (20K points) and Open Directory Project (ODP) (2.4 million points). All experiments show that S3H gets the best search performance. This paper is organized as follows: Section 2 briefly introduces the background and some related works. In Section 3, we describe our proposed Semi-Supervised SimHash (S3H). Section 4 provides experimental validation on two datasets. The conclusions are given in Section 5.

1 Hamming space is a set of binary strings of length L.

2 Background and Related Works

Suppose we are given a set of N documents, X = {x_i | x_i ∈ R^M}_{i=1}^N. For a given query doc q, DSS tries to find its nearest neighbors in X, or a subset X′ ⊂ X in which the distance from the documents to the query doc q is less than a given threshold. However, both tasks are computationally infeasible for large-scale data. Thus, we turn to the approximate similarity search problem (Indyk et al., 1998). In this section, we briefly review some related approximate similarity search methods.

2.1 SimHash

SimHash (SH) was first proposed by Charikar (Charikar M.S., 2002). SH uses random projections as hash functions, i.e.,

h(x) = sign(w^T x) = +1 if w^T x ≥ 0, and −1 otherwise,   (1)

where w ∈ R^M is a randomly generated vector. SH specifies the distribution on a family of hash functions H = {h} such that for two objects x_i and x_j,

Pr_{h∈H} {h(x_i) = h(x_j)} = 1 − θ(x_i, x_j)/π,   (2)

where θ(x_i, x_j) is the angle between x_i and x_j. Obviously, SH is an unsupervised hashing method.

2.2 Kernelized Locality Sensitive Hashing

A kernelized variant of SH, named Kernelized Locality Sensitive Hashing (KLSH) (Kulis et al., 2009a), is proposed for non-linearly separable data. KLSH approximates the underlying Gaussian distribution in the implicit embedding space of the data based on the central limit theorem. To calculate the value of the hash function h(·), KLSH projects points onto the eigenvectors of the kernel matrix.
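To make the random-projection view of Equations (1) and (2) concrete, here is a small Python sketch of SimHash-style fingerprinting. The dense vectors and Gaussian projections are illustrative assumptions, and this sketch is the unsupervised SH baseline, not the S3H method proposed in this paper.

```python
# Sketch: SimHash fingerprints via L random hyperplanes (Eq. 1), where similar
# vectors collide on more bits because Pr[h(x_i)=h(x_j)] = 1 - theta/pi (Eq. 2).
import numpy as np

def simhash_fingerprint(x, W):
    """L-bit fingerprint: signs of L random projections of x (as 0/1 bits)."""
    return (W @ x >= 0).astype(int)           # W has shape (L, M)

def hamming_distance(a, b):
    return int(np.sum(a != b))

rng = np.random.default_rng(0)
M, L = 1000, 64                               # feature dimension, code length
W = rng.normal(size=(L, M))                   # fixed random hyperplanes

x = rng.random(M)
y = x + 0.05 * rng.normal(size=M)             # a slightly perturbed "similar" doc
z = rng.random(M)                             # an unrelated doc

fx, fy, fz = (simhash_fingerprint(v, W) for v in (x, y, z))
print(hamming_distance(fx, fy), hamming_distance(fx, fz))
# The small-angle pair should disagree on far fewer bits than the unrelated pair.
```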
In short, the complete procedure of KLSH can be summarized as follows: 1) randomly select P (a small value) points from X and form the kernel matrix, 2) for each hash function h(ϕ(x)), calculate its weight ω ∈ R^P just as in Kernel PCA (Schölkopf et al., 1997), and 3) define the hash function as

h(ϕ(x)) = sign( Σ_{i=1}^{P} ω_i · κ(x, x_i) ),   (3)

where κ(·, ·) can be any kernel function. KLSH can improve hashing results via the kernel trick. However, KLSH is unsupervised, thus designing a data-specific kernel remains a big challenge.

2.3 Semi-Supervised Hashing

Semi-Supervised Hashing (SSH) (Wang et al., 2010a) was recently proposed to incorporate prior knowledge for better hashing. Besides X, prior knowledge in the form of similar and dissimilar object-pairs is also required in SSH. SSH tries to find L optimal hash functions which have maximum empirical accuracy on prior knowledge and maximum entropy by finding the top L eigenvectors of an extended covariance matrix2 via PCA or SVD. However, despite the potential problems of numerical stability, SVD requires massive computational space and O(M^3) computational time, where M is the feature dimension, which limits its usage for high-dimensional data (Trefethen et al., 1997). Furthermore, the variance of directions obtained by PCA decreases with the decrease of the rank (Jolliffe, 1986). Thus, lower hash functions tend to have smaller entropy and larger empirical errors.

2.4 Others

Some other related works should be mentioned. A notable method is Locality Sensitive Hashing (LSH) (Indyk et al., 1998). LSH performs a random linear projection to map similar objects to similar hash codes. However, LSH suffers from the efficiency problem that it tends to generate long codes (Salakhutdinov et al., 2007). LAMP (Mu et al., 2009) considers each hash function as a binary partition problem as in SVMs (Burges, 1998). Spectral Hashing (Weiss et al., 2009) maintains similarity between objects in the reduced Hamming space by minimizing the averaged Hamming distance3 between similar neighbors in the original Euclidean space. However, spectral hashing makes the assumption that data should be distributed uniformly, which is always violated in real-world applications.

2 The extended covariance matrix is composed of two components: one is an unsupervised covariance term and the other is a constraint matrix involving labeled information.
3 Hamming distance is defined as the number of bits that are different between two binary strings.

3 Semi-Supervised SimHash

In this section, we present our hashing method, named Semi-Supervised SimHash (S3H). Let X_L = {(x_1, c_1), . . . , (x_u, c_u)} be the labeled data, c ∈ {1, . . . , C}, x ∈ R^M, and X_U = {x_{u+1}, . . . , x_N} the unlabeled data. Let X = X_L ∪ X_U. Given the labeled data X_L, we construct two sets, attraction set Θa and repulsion set Θr. Specifically, any pair (x_i, x_j) ∈ Θa, i, j ≤ u, denotes that x_i and x_j are in the same class, i.e., c_i = c_j, while any pair (x_i, x_j) ∈ Θr, i, j ≤ u, denotes that c_i ≠ c_j. Unlike previous works that attempt to find L optimal hyperplanes, the basic idea of S3H is to fix L random hyperplanes and to find an optimal feature-weight vector to relocate the objects such that similar objects have similar codes.
3.1 Data Representation

Since the L random hyperplanes are fixed, we can represent an object x ∈ X by its relative position to these random hyperplanes, i.e.,

D = Λ · V,   (4)

where the element V_{ml} ∈ {+1, −1, 0} of V indicates that the object x is above, below or just on the l-th hyperplane with respect to the m-th feature, and Λ = diag(|x_1|, |x_2|, . . . , |x_M|) is a diagonal matrix which, to some extent, reflects the distance from x to these hyperplanes.

3.2 Formulation

Hashing maps the data set X to an L-dimensional Hamming space for compact representations. If we represent each object as in Equation (4), the l-th hash function is then defined as

h_l(x) = ℏ_l(D) = sign(w^T d_l),   (5)

where w ∈ R^M is the feature weight to be determined and d_l is the l-th column of the matrix D. Intuitively, the "contribution" of a specific feature to different classes is different. Therefore, we hope to incorporate this side information in S3H for better hashing. Inspired by (Madani et al., 2009), we can measure this contribution over X_L as in Algorithm 1.

Algorithm 1: Feature Contribution Calculation
for each (x, c) ∈ X_L do
    for each f ∈ x do
        ν_f ← ν_f + x_f;  ν_{f,c} ← ν_{f,c} + x_f;
    end
end
for each feature f and class c do
    ν_{f,c} ← ν_{f,c} / ν_f;
end

Clearly, if objects are represented as the occurrence numbers of features, the output of Algorithm 1 is just the conditional probability Pr(class|feature). Finally, each object (x, c) ∈ X_L can be represented as an M × L matrix G:

G = diag(ν_{1,c}, ν_{2,c}, . . . , ν_{M,c}) · D.   (6)

Note that one pair (x_i, x_j) in Θa or Θr corresponds to (G_i, G_j), or to (D_i, D_j) if we ignore the features' contribution to different classes.

Furthermore, we also hope to maximize the empirical accuracy on the labeled data Θa and Θr and maximize the entropy of hash functions. So, we define the following objective for the ℏ(·)s:

J(w) = (1/N_p) Σ_{l=1}^{L} { Σ_{(x_i,x_j)∈Θa} ℏ_l(x_i) ℏ_l(x_j) − Σ_{(x_i,x_j)∈Θr} ℏ_l(x_i) ℏ_l(x_j) } + λ_1 Σ_{l=1}^{L} H(ℏ_l),   (7)

where N_p = |Θa| + |Θr| is the number of attraction and repulsion pairs and λ_1 is a tradeoff between the two terms. Wang et al. have proven that hash functions with maximum entropy must maximize the variance of the hash values, and vice-versa (Wang et al., 2010b). Thus, H(ℏ(·)) can be estimated over the labeled and unlabeled data, X_L and X_U. Unfortunately, a direct solution of the above problem is non-trivial since Equation (7) is not differentiable. Thus, we relax the objective and add an additional regularization term which could effectively avoid overfitting. Finally, we obtain the total objective:

L(w) = (1/N_p) Σ_{l=1}^{L} { Σ_{(G_i,G_j)∈Θa} ψ(w^T g_{i,l}) ψ(w^T g_{j,l}) − Σ_{(G_i,G_j)∈Θr} ψ(w^T g_{i,l}) ψ(w^T g_{j,l}) }
       + (λ_1 / 2N) Σ_{l=1}^{L} { Σ_{i=1}^{u} ψ^2(w^T g_{i,l}) + Σ_{i=u+1}^{N} ψ^2(w^T d_{i,l}) } − (λ_2 / 2) ∥w∥_2^2,   (8)

where g_{i,l} and d_{i,l} denote the l-th column of G_i and D_i respectively, and ψ(t) is a piece-wise linear function defined as

ψ(t) = T_g if t > T_g;  t if −T_g ≤ t ≤ T_g;  −T_g if t < −T_g.   (9)

This relaxation has a good intuitive explanation. That is, similar objects are desired to not only have similar fingerprints but also have sufficiently large projection magnitudes, while dissimilar objects are desired to not only differ in their fingerprints but also have a large projection margin. However, we do not hope that a small fraction of object-pairs with very large projection magnitude or margin dominate the complete model. Thus, the piece-wise linear function ψ(·) is applied in S3H. As a result, Equation (8) is a simple unconstrained optimization problem, which can be efficiently solved by a well-known Quasi-Newton algorithm, i.e., L-BFGS (Liu et al., 1989).
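Before turning to the optimization details, note that Algorithm 1 above can be implemented in a few lines. The following runnable Python sketch assumes a sparse dictionary representation of documents (an illustrative choice, not the authors' data structures) and returns the ν_{f,c} values, which reduce to Pr(class | feature) when the feature values are occurrence counts.

```python
# Sketch of Algorithm 1 (Feature Contribution Calculation).
from collections import defaultdict

def feature_contribution(labeled_docs):
    """labeled_docs: list of (features, class) pairs, where `features` maps
    feature id -> value x_f (e.g. a term occurrence count).
    Returns nu[(f, c)] = mass of feature f in class c / total mass of f."""
    nu_f = defaultdict(float)       # total mass of each feature
    nu_fc = defaultdict(float)      # mass of each (feature, class)
    for features, c in labeled_docs:
        for f, x_f in features.items():
            nu_f[f] += x_f
            nu_fc[(f, c)] += x_f
    return {(f, c): v / nu_f[f] for (f, c), v in nu_fc.items()}

docs = [({"apple": 2, "ball": 1}, 0),
        ({"apple": 1, "cat": 3}, 1)]
nu = feature_contribution(docs)
print(round(nu[("apple", 0)], 3), round(nu[("apple", 1)], 3))   # 0.667 0.333
```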
For description simplicity, only the attraction set Θa is considered; the extension to the repulsion set Θr is straightforward. Thus, the gradient of L(w) is as follows:

∂L(w)/∂w = (1/N_p) Σ_{l=1}^{L} { Σ_{(G_i,G_j)∈Θa, |w^T g_{i,l}| ≤ T_g} ψ(w^T g_{j,l}) · g_{i,l} + Σ_{(G_i,G_j)∈Θa, |w^T g_{j,l}| ≤ T_g} ψ(w^T g_{i,l}) · g_{j,l} }
          + (λ_1 / N) Σ_{l=1}^{L} { Σ_{i=1, |w^T g_{i,l}| ≤ T_g}^{u} ψ(w^T g_{i,l}) · g_{i,l} + Σ_{i=u+1, |w^T d_{i,l}| ≤ T_g}^{N} ψ(w^T d_{i,l}) · d_{i,l} } − λ_2 w.   (10)

Note that ∂ψ(t)/∂t = 0 when |t| > T_g.

3.3 Fingerprint Generation

When we get the optimal weight w*, we generate fingerprints for given objects through Equation (5). It then turns to the problem of how to efficiently obtain the representation in Equation (4) for an object. After analysis, we find: 1) the hyperplanes are randomly generated and we only need to determine which sides of these hyperplanes the given object lies on, and 2) in real-world applications, objects such as docs are always very sparse. Thus, we can avoid heavy computational demands and efficiently generate fingerprints for objects. In practice, given an object x, the procedure of generating an L-bit fingerprint is as follows: it maintains an L-dimensional vector initialized to zero. Each feature f ∈ x is first mapped to an L-bit hash value by the Jenkins Hashing Function4. Then, these L bits increment or decrement the L components of the vector by the value x_f × w*_f. After all features are processed, the signs of the components determine the corresponding bits of the final fingerprint. The complete algorithm is presented in Algorithm 2.

Algorithm 2: Fast Fingerprint Generation
INPUT: x and w*;
initialize α ← 0, β ← 0, α, β ∈ R^L;
for each f ∈ x do
    randomly project f to h_f ∈ {−1, +1}^L;
    α ← α + x_f · w*_f · h_f;
end
for l = 1 to L do
    if α_l > 0 then β_l ← 1;
end
RETURN β;

4 http://www.burtleburtle.net/bob/hash/doobs.html
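The fingerprint-generation procedure of Algorithm 2 is easy to express in code. The sketch below is an illustration in which Python's built-in hash stands in for the Jenkins hash, and the sparse document/weight dictionaries are assumptions, not the authors' implementation.

```python
# Sketch of Algorithm 2: accumulate signed contributions per bit, then take signs.
import numpy as np

def feature_bits(f, L, seed=0):
    """Map a feature id to a fixed pseudo-random vector in {-1, +1}^L."""
    rng = np.random.default_rng(abs(hash((f, seed))) % (2**32))
    return rng.choice([-1.0, 1.0], size=L)

def s3h_fingerprint(doc, w, L=64):
    """doc: dict feature -> value x_f; w: dict feature -> learned weight w*_f."""
    alpha = np.zeros(L)
    for f, x_f in doc.items():
        alpha += x_f * w.get(f, 0.0) * feature_bits(f, L)
    return (alpha > 0).astype(int)   # beta

doc = {"apple": 2.0, "cat": 1.0}
weights = {"apple": 0.7, "cat": 1.2}
print(s3h_fingerprint(doc, weights, L=16))
```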
3.4 Algorithmic Analysis

This section briefly analyzes the relation between S3H and some existing methods. For analysis simplicity, we assume ψ(t) = t and ignore the regularization terms. So, Equation (8) can be rewritten as follows:

J(w)_{S3H} = (1/2) w^T [ Σ_{l=1}^{L} Γ_l (Φ^+ − Φ^−) Γ_l^T ] w,   (11)

where Φ^+_{ij} equals 1 when (x_i, x_j) ∈ Θa and 0 otherwise, Φ^−_{ij} equals 1 when (x_i, x_j) ∈ Θr and 0 otherwise, and Γ_l = [g_{1,l} . . . g_{u,l}, d_{u+1,l} . . . d_{N,l}]. We denote Σ_l Γ_l Φ^+ Γ_l^T and Σ_l Γ_l Φ^− Γ_l^T as S^+ and S^− respectively. Therefore, maximizing the above function is equivalent to maximizing the following:

J̃(w)_{S3H} = |w^T S^+ w| / |w^T S^− w|.   (12)

Clearly, Equation (12) is analogous to Linear Discriminant Analysis (LDA) (Duda et al., 2000) except for the following differences: 1) measurement: S3H uses similarity while LDA uses distance. As a result, the objective function of S3H is just the reciprocal of LDA's. 2) embedding space: LDA seeks the best separative direction in the original attribute space. In contrast, S3H first maps the data from R^M to R^{M×L} through the following projection function

ϕ(x) = x · [diag(sign(r_1)), . . . , diag(sign(r_L))],   (13)

where r_l ∈ R^M, l = 1, . . . , L, are the L random hyperplanes. Then, in that space (R^{M×L}), S3H seeks a direction5 that can best separate the data. From this point of view, it is obvious that the basic SH is a special case of S3H when w is set to e = [1, 1, . . . , 1]. That is, SH first maps the data via ϕ(·) just as S3H does. But then, SH directly separates the data in that feature space at the direction e.

5 The direction is determined by concatenating w L times.

Analogously, we ignore the regularization terms in SSH and rewrite the objective of SSH as:

J(W)_{SSH} = (1/2) tr[ W^T X (Φ^+ − Φ^−) X^T W ],   (14)

where W = [w_1, . . . , w_L] ∈ R^{M×L} are L hyperplanes and X = [x_1, . . . , x_N]. Maximizing this objective is equivalent to maximizing the following:

J̃(W)_{SSH} = | tr[W^T S′^+ W] | / | tr[W^T S′^− W] |,   (15)

where S′^+ = X Φ^+ X^T and S′^− = X Φ^− X^T. Equation (15) shows that SSH is analogous to Multiple Discriminant Analysis (MDA) (Duda et al., 2000). In fact, SSH uses the top L best-separative hyperplanes in the original attribute space, found via PCA, to hash the data. Furthermore, we rewrite the projection function ϕ(·) in S3H as:

ϕ(x) = x · [R_1, . . . , R_L],   (16)

where R_l = diag(sign(r_l)). Each R_l is a mapping from R^M to R^M and corresponds to one embedding space. From this perspective, unlike SSH, S3H globally seeks a direction that can best separate the data in L different embedding spaces simultaneously.

4 Experiments

We use two datasets, 20 Newsgroups and Open Directory Project (ODP), in our experiments. Each document is represented as a vector of occurrence numbers of the terms within it. The class information of docs is considered as prior knowledge: two docs within the same class should have more similar fingerprints, while two docs within different classes should have dissimilar fingerprints. We will demonstrate that our S3H can effectively incorporate this prior knowledge to improve the DSS performance.

Figure 1: Mean Averaged Precision (MAP) for different numbers of bits for hash ranking on 20 Newsgroups. (a) 10K features. (b) 30K features.

We use Inverted List (IL) (Manning et al., 2002) as the baseline. In fact, given a query doc, IL returns all the docs that contain any term within it. We also compare our method with three state-of-the-art hashing methods, i.e., KLSH, SSH and SH. In KLSH, we adopt the RBF kernel κ(x_i, x_j) = exp(−∥x_i − x_j∥_2^2 / δ^2), where the scaling factor δ^2 is set to 0.5 and the other two parameters p and t are set to 500 and 50 respectively. The parameter λ in SSH is set to 1. For S3H, we simply set the parameters λ_1 and λ_2 in Equation (8) to 4 and 0.5 respectively. To objectively reflect the performance of S3H, we evaluate our S3H with and without the Feature Contribution Calculation algorithm (FCC) (Algorithm 1). Specifically, FCC-free S3H (denoted as S3Hf) is just the simplification in which the Gs in S3H are set to the Ds. For quantitative evaluation, as in the literature (Wang et al., 2010b; Mu et al., 2009), we calculate the precision under two scenarios: hash lookup and hash ranking. For hash lookup, the proportion of good neighbors (i.e., objects with the same class label as the query) among the searched objects within a given Hamming radius is calculated as precision. Similarly to (Wang et al., 2010b; Weiss et al., 2009), for a query document, if no neighbors within the given Hamming radius can be found, it is considered as zero precision. Note that the precision of IL is the proportion of good neighbors among all searched objects. For hash ranking, all the objects in X are ranked in terms of their Hamming distance from the query document, and the top K nearest neighbors are returned as the result. Then, Mean Averaged Precision (MAP) (Manning et al., 2002) is calculated. We also calculate the averaged intra- and inter-class Hamming distance for various hashing methods.
Intuitively, a good hashing method should have small intra-class distance and large inter-class distance. We test all the methods on a PC with a 2.66 GHz processor and 12GB RAM. All experiments are repeated 10 times and the averaged results are reported.

4.1 20 Newsgroups

20 Newsgroups6 contains 20K messages, about 1K messages from each of 20 different newsgroups. The entire vocabulary includes 62,061 words. To evaluate the performance for different feature dimensions, we use the Chi-squared feature selection algorithm (Forman, 2003) to select 10K and 30K features. The averaged message length is 54.1 for 10K features and 116.2 for 30K features. We randomly select 4K messages as the test set and the remaining 16K as the training set. To train SSH and S3H, from the training set, we randomly generate 40K message-pairs as Θa and 80K message-pairs as Θr.

6 http://www.cs.cmu.edu/afs/cs/project/theo-3/www/

For hash ranking, Figure 1 shows MAP for various methods using different numbers of bits. It shows that the performance of SSH decreases with the growing number of hash bits. This is mainly because the variance of the directions obtained by PCA decreases with the decrease of their ranks. Thus, lower bits have larger empirical errors. For S3H, FCC (Algorithm 1) can significantly improve the MAP, just as discussed in Section 3.2. Moreover, the MAP of FCC-free S3H (S3Hf) is affected by feature dimensions while FCC-based (S3H) is relatively stable. This implies FCC can also improve the stability of S3H. As we see, S3Hf ignores the contribution of features to different classes. However, besides the local description of data locality in the form of object-pairs, such (global) information also provides proper guidance for hashing. So, for S3Hf, the reason why its results with 30K features are worse than the results with 10K features is probably because S3Hf learns to hash only according to the local description of data locality, and many not-too-relevant features lead to a relatively poor description. In contrast, S3H can utilize global information to better understand the similarity among objects. In short, S3H obtains the best MAP for all bits and feature dimensions.

For hash lookup, Figure 2 presents the precision within Hamming radius 3 for different numbers of bits. It shows that IL even outperforms SH. This is because few objects can be hashed by SH into one hash bucket. Thus, for many queries, SH fails to return any neighbor even in a large Hamming radius of 3. Clearly, S3H outperforms all the other methods for different numbers of hash bits and features. The number of messages searched by different methods is reported in Figure 3.

Figure 2: Precision within Hamming radius 3 for hash lookup on 20 Newsgroups. (a) 10K features. (b) 30K features.

Figure 3: Averaged searched sample numbers using 4K query messages for hash lookup. (a) 10K features. (b) 30K features.
We find that the number of searched data of S3H (with/without FCC) decreases much more slowly than KLSH, SH and SSH with the growing of the number of hash bits. As discussed in Section 3.4, this mainly benefits from the design of S3H that S3H (globally) seeks a direction that can best separate the data in L embedding spaces simultaneously. We also find IL returns a large number of neighbors of each query message which leads to its poor efficiency. The averaged intra- and inter- class Hamming distance of different methods are reported in Table 1. As it shows, S3H has relatively larger margin (∆) between intra- and inter-class Hamming distance. This indicates that S3H is more effective to make similar points have similar fingerprints while keep intra-class inter-class ∆ S3H 13.1264 15.6342 2.5078 S3Hf 12.5754 13.3479 0.7725 SSH 6.4134 6.5262 0.1128 SH 15.3908 15.6339 0.2431 KLSH 10.2876 10.8713 0.5841 Table 1: Averaged intra- and inter- class Hamming distance of 20 Newsgroups for 32-bit fingerprint. ∆is the difference between the averaged inter- and intra- class Hamming distance. Large ∆implies good hashing. 10 20 30 40 10 0 10 1 10 2 10 3 T im e (sec.) Feature dimension (K) S 3 H SSH SH KLSH IL (a) 10 20 30 40 10 1 10 2 10 3 10 4 Sp ace (M B ) Feature dimension (K) S 3 H SSH SH KLSH IL (b) Figure 4: Computational complexity of training for different feature dimensions for 32-bit fingerprint. (a) Training time (sec). (b) Training space cost (MB). the dissimilar points away enough from each other. Figure 4 shows the (training) computational complexity of different methods. We find that the time and space cost of SSH grows much faster than SH, KLSH and S3H with the growing of feature dimension. This is mainly because SSH requires SVD to find the optimal hashing functions which is computational expensive. Instead, S3H seeks the optimal feature weights via L-BFGS, which is still efficient even for very high-dimensional data. 4.2 Open Directory Project (ODP) Open Directory Project (ODP)7 is a multilingual open content directory of web links (docs) organized by a hierarchical ontology scheme. In our experiment, only English docs8 at level 3 of the category tree are utilized to evaluate the performance. In short, the dataset contains 2,483,388 docs within 6,008 classes. There are totally 862,050 distinct words and each doc contains 14.13 terms on average. Since docs are too short, we do not conduct 7http://rdf.dmoz.org/ 8The title together with the corresponding short description of a page are considered as a document in our experiments. 99 1 10 100 1k 10k 100k 0.00 0.01 0.02 0.03 0.04 Percen tag e Class size (a) 0 20 40 60 80 100 120 0.00 0.02 0.04 0.06 0.08 0.10 Percen tag e Document length (b) Figure 5: Overview of ODP data set. (a) Class distribution at level 3. (b) Distribution of document length. intra-class inter-class ∆ S3H 14.0029 15.9508 1.9479 S3Hf 14.3801 15.5260 1.1459 SH 14.7725 15.6432 0.8707 KLSH 9.3382 10.5700 1.2328 Table 2: Averaged intra- and inter- class Hamming distance of ODP for 32-bit fingerprint (860K features). ∆ is the difference between averaged intra- and inter- class Hamming distance. feature selection9. An overview of ODP is shown in Figure 5. We randomly sample 10% docs as the test set and the remain as the training set. Furthermore, from training set, we randomly generate 800K docpairs as Θa, and 1 million doc-pairs as Θr. Note that, since there are totally over 800K features, it is extremely inefficient to train SSH. 
Therefore, we only compare our S3H with IL, KLSH and SH. The search performance is given in Figure 6. Figure 6(a) shows the MAP for various methods using different number of bits. It shows KLSH outperforms SH, which mainly contributes to the kernel trick. S3H and S3Hf have higher MAP than KLSH and SH. Clearly, FCC algorithm can improve the MAP of S3H for all bits. Figure 6(b) presents the precision within Hamming radius 2 for hash lookup. We find that IL outperforms SH since SH fails for many queries. It also shows that S3H (with FCC) can obtain the best precision for all bits. Table 2 reports the averaged intra- and inter-class Hamming distance for various methods. It shows that S3H has the largest margin (∆). This demon9We have tested feature selection. However, if we select 40K features via Chi-squared feature selection method, documents are represented by 3.15 terms on average. About 44.9% documents are represented by no more than 2 terms. 24 32 40 48 56 64 0.15 0.20 0.25 0.30 0.35 M ean A verag ed Precisio n (M A P) Number of bits S 3 H S 3 H f SH KLSH (a) 24 32 40 48 56 64 0.03 0.06 0.09 0.12 0.15 0.18 Pre c is ion w ithin H a m m ing ra dius 2 Number of bits S 3 H S 3 H f SH KLSH IL (b) Figure 6: Retrieval performance of different methods on ODP. (a) Mean Averaged Precision (MAP) for different number of bits for hash ranking. (b) Precision within Hamming radius 2 for hash lookup. strates S3H can measure the similarity among the data better than KLSH and SH. We should emphasize that KLSH needs 0.3ms to return the results for a query document for hash lookup, and S3H needs <0.1ms. In contrast, IL requires about 75ms to finish searching. This is mainly because IL always returns a large number of objects (dozens or hundreds times more than S3H and KLSH) and requires much time for post-processing. All the experiments show S3H is more effective, efficient and stable than the baseline method and the state-of-the-art hashing methods. 5 Conclusions We have proposed a novel supervised hashing method named Semi-Supervised Simhash (S3H) for high-dimensional data similarity search. S3H learns the optimal feature weights from prior knowledge to relocate the data such that similar objects have similar fingerprints. This is implemented by maximizing the empirical accuracy on labeled data together with the entropy of hash functions. The proposed method leads to a simple Quasi-Newton based solution which is efficient even for very highdimensional data. Experiments performed on two large datasets have shown that S3H has better search performance than several state-of-the-art methods. 6 Acknowledgements We thank Fangtao Li for his insightful suggestions. We would also like to thank the anonymous reviewers for their helpful comments. This work is supported by the National Natural Science Foundation of China under Grant No. 60873174. 100 References Christopher J.C. Burges. 1998. A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2(2):121-167. Moses S. Charikar. 2002. Similarity estimation techniques from rounding algorithms. In Proceedings of the 34th annual ACM symposium on Theory of computing, pages 380-388. Gianni Costa, Giuseppe Manco and Riccardo Ortale. 2010. An incremental clustering scheme for data deduplication. Data Mining and Knowledge Discovery, 20(1):152-187. Jeffrey Dean and Monika R. Henzinge. 1999. Finding Related Pages in the World Wide Web. Computer Networks, 31:1467-1479. Richard O. Duda, Peter E. Hart and David G. Stork. 2000. 
Pattern classification, 2nd edition. WileyInterscience. George Forman 2003. An extensive empirical study of feature selection metrics for text classification. The Journal of Machine Learning Research, 3:1289-1305. Piotr Indyk and Rajeev Motwani. 1998. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the 30th annual ACM symposium on Theory of computing, pages 604-613. Ian Jolliffe. 1986. Principal Component Analysis. Springer-Verlag, New York. Yan Ke, Rahul Sukthankar and Larry Huston. 2004. Efficient near-duplicate detection and sub-image retrieval. In Proceedings of the ACM International Conference on Multimedia. Brian Kulis and Kristen Grauman. 2009. Kernelized locality-sensitive hashing for scalable image search. In Proceedings of the 12th International Conference on Computer Vision, pages 2130-2137. Brian Kulis, Prateek Jain and Kristen Grauman. 2009. Fast similarity search for learned metrics. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 2143-2157. Dong C. Liu and Jorge Nocedal. 1989. On the limited memory BFGS method for large scale optimization. Mathematical programming, 45(1): 503-528. Omid Madani, Michael Connor and Wiley Greiner. 2009. Learning when concepts abound. The Journal of Machine Learning Research, 10:2571-2613. Gurmeet Singh Manku, Arvind Jain and Anish Das Sarma. 2007. Detecting near-duplicates for web crawling. In Proceedings of the 16th international conference on World Wide Web, pages 141-150. Christopher D. Manning, Prabhakar Raghavan and Hinrich Sch¨utze. 2002. An introduction to information retrieval. Spring. Yadong Mu, Jialie Shen and Shuicheng Yan. 2010. Weakly-Supervised Hashing in Kernel Space. In Proceedings of International Conference on Computer Vision and Pattern Recognition, pages 3344-3351. Ruslan Salakhutdinov and Geoffrey Hintona. 2007. Semantic hashing. In SIGIR workshop on Information Retrieval and applications of Graphical Models. Bernhard Sch¨olkopf, Alexander Smola and Klaus-Robert M¨uller. 1997. Kernel principal component analysis. Advances in Kernel Methods - Support Vector Learning, pages 583-588. MIT. Lloyd N. Trefethen and David Bau. 1997. Numerical linear algebra. Society for Industrial Mathematics. Xiaojun Wan, Jianwu Yang and Jianguo Xiao. 2008. Towards a unified approach to document similarity search using manifold-ranking of blocks. Information Processing & Management, 44(3):1032-1048. Jun Wang, Sanjiv Kumar and Shih-Fu Chang. 2010a. Semi-Supervised Hashing for Scalable Image Retrieval. In Proceedings of International Conference on Computer Vision and Pattern Recognition, pages 3424-3431. Jun Wang, Sanjiv Kumar and Shih-Fu Chang. 2010b. Sequential Projection Learning for Hashing with Compact Codes. In Proceedings of International Conference on Machine Learning. Yair Weiss, Antonio Torralba and Rob Fergus. 2009. Spectral hashing. In Proceedings of Advances in Neural Information Processing Systems. 101
2011
10
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 997–1006, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Automatically Evaluating Text Coherence Using Discourse Relations Ziheng Lin, Hwee Tou Ng and Min-Yen Kan Department of Computer Science National University of Singapore 13 Computing Drive Singapore 117417 {linzihen,nght,kanmy}@comp.nus.edu.sg Abstract We present a novel model to represent and assess the discourse coherence of text. Our model assumes that coherent text implicitly favors certain types of discourse relation transitions. We implement this model and apply it towards the text ordering ranking task, which aims to discern an original text from a permuted ordering of its sentences. The experimental results demonstrate that our model is able to significantly outperform the state-ofthe-art coherence model by Barzilay and Lapata (2005), reducing the error rate of the previous approach by an average of 29% over three data sets against human upper bounds. We further show that our model is synergistic with the previous approach, demonstrating an error reduction of 73% when the features from both models are combined for the task. 1 Introduction The coherence of a text is usually reflected by its discourse structure and relations. In Rhetorical Structure Theory (RST), Mann and Thompson (1988) observed that certain RST relations tend to favor one of two possible canonical orderings. Some relations (e.g., Concessive and Conditional) favor arranging their satellite span before the nucleus span. In contrast, other relations (e.g., Elaboration and Evidence) usually order their nucleus before the satellite. If a text that uses non-canonical relation orderings is rewritten to use canonical orderings, it often improves text quality and coherence. This notion of preferential ordering of discourse relations is observed in natural language in general, and generalizes to other discourse frameworks aside from RST. The following example shows a Contrast relation between the two sentences. (1) [ Everyone agrees that most of the nation’s old bridges need to be repaired or replaced. ]S1 [ But there’s disagreement over how to do it. ]S2 Here the second sentence provides contrasting information to the first. If this order is violated without rewording (i.e., if the two sentences are swapped), it produces an incoherent text (Marcu, 1996). In addition to the intra-relation ordering, such preferences also extend to inter-relation ordering: (2) [ The Constitution does not expressly give the president such power. ]S1 [ However, the president does have a duty not to violate the Constitution. ]S2 [ The question is whether his only means of defense is the veto. ]S3 The second sentence above provides a contrast to the previous sentence and an explanation for the next one. This pattern of Contrast-followed-by-Cause is rather common in text (Pitler et al., 2008). Ordering the three sentences differently results in incoherent, cryptic text. Thus coherent text exhibits measurable preferences for specific intra- and inter-discourse relation ordering. Our key idea is to use the converse of this phenomenon to assess the coherence of a text. In this paper, we detail our model to capture the coherence of a text based on the statistical distribution of the discourse structure and relations. Our method specifically focuses on the discourse relation transitions between adjacent sentences, modeling them in a discourse role matrix. 
997 Our study makes additional contributions. We implement and validate our model on three data sets, which show robust improvements over the current state-of-the-art for coherence assessment. We also provide the first assessment of the upper-bound of human performance on the standard task of distinguishing coherent from incoherent orderings. To the best our knowledge, this is also the first study in which we show output from an automatic discourse parser helps in coherence modeling. 2 Related Work The study of coherence in discourse has led to many linguistic theories, of which we only discuss algorithms that have been reduced to practice. Barzilay and Lapata (2005; 2008) proposed an entity-based model to represent and assess local textual coherence. The model is motivated by Centering Theory (Grosz et al., 1995), which states that subsequent sentences in a locally coherent text are likely to continue to focus on the same entities as in previous sentences. Barzilay and Lapata operationalized Centering Theory by creating an entity grid model to capture discourse entity transitions at the sentence-to-sentence level, and demonstrated their model’s ability to discern coherent texts from incoherent ones. Barzilay and Lee (2004) proposed a domain-dependent HMM model to capture topic shift in a text, where topics are represented by hidden states and sentences are observations. The global coherence of a text can then be summarized by the overall probability of topic shift from the first sentence to the last. Following these two directions, Soricut and Marcu (2006) and Elsner et al. (2007) combined the entity-based and HMM-based models and demonstrated that these two models are complementary to each other in coherence assessment. Our approach differs from these models in that it introduces and operationalizes another indicator of discourse coherence, by modeling a text’s discourse relation transitions. Karamanis (2007) has tried to integrate local discourse relations into the Centering-based coherence metrics for the task of information ordering, but was not able to obtain improvement over the baseline method, which is partly due to the much smaller data set and the way the discourse relation information is utilized in heuristic constraints and rules. To implement our proposal, we need to identify the text’s discourse relations. This task, discourse parsing, has been a recent focus of study in the natural language processing (NLP) community, largely enabled by the availability of large-scale discourse annotated corpora (Wellner and Pustejovsky, 2007; Elwell and Baldridge, 2008; Lin et al., 2009; Pitler et al., 2009; Pitler and Nenkova, 2009; Lin et al., 2010; Wang et al., 2010). The Penn Discourse Treebank (PDTB) (Prasad et al., 2008) is such a corpus which provides a discourse-level annotation on top of the Penn Treebank, following a predicateargument approach (Webber, 2004). Crucially, the PDTB provides annotations not only on explicit (i.e., signaled by discourse connectives such as because) discourse relations, but also implicit (i.e., inferred by readers) ones. 3 Using Discourse Relations To utilize discourse relations of a text, we first apply automatic discourse parsing on the input text. 
While any discourse framework, such as the Rhetorical Structure Theory (RST), could be applied in our work to encode discourse information, we have chosen to work with the Discourse Lexicalized Tree Adjoining Grammar (D-LTAG) by Webber (2004) as embodied in the PDTB, as a PDTB-styled discourse parser1 developed by Lin et al. (2010) has recently become freely available. This parser tags each explicit/implicit relation with two levels of relation types. In this work, we utilize the four PDTB Level-1 types: Temporal (Temp), Contingency (Cont), Comparison (Comp), and Expansion (Exp). This parser automatically identifies the discourse relations, labels the argument spans, and classifies the relation types, including identifying common entity and no relation (EntRel and NoRel) as types. A simple approach to directly model the connections among discourse relations is to use the sequence of discourse relation transitions. Text (2) in Section 1 can be represented by S1 Comp −→S2 Cont −→ S3, for instance, when we use Level-1 types. In such a basic approach, we can compile a distribu1http://wing.comp.nus.edu.sg/˜linzihen/ parser/ 998 tion of the n-gram discourse relation transition sequences in gold standard coherent text, and a similar one for incoherent text. For example, the above text would generate the transition bigram Comp→Cont. We can build a classifier to distinguish one from the other through learned examples or using a suitable distribution distance measure (e.g., KL Divergence). In our pilot work where we implemented such a basic model with n-gram features for relation transitions, the performance was very poor. Our analysis revealed a serious shortcoming: as the discourse relation transitions in short texts are few in number, we have very little data to base the coherence judgment on. However, when faced with even short text excerpts, humans can distinguish coherent texts from incoherent ones, as exemplified in our example texts. The basic approach also does not model the intra-relation preference. In Text (1), a Comparison (Comp) relation would be recorded between the two sentences, irregardless of whether S1 or S2 comes first. However, it is clear that the ordering of (S1 ≺S2) is more coherent. 4 A Refined Approach The central problem with the basic approach is in its sparse modeling of discourse relations. In developing an improved model, we need to better exploit the discourse parser’s output to provide more circumstantial evidence to support the system’s coherence decision. In this section, we introduce the concept of a discourse role matrix which aims to capture an expanded set of discourse relation transition patterns. We describe how to represent the coherence of a text with its discourse relations and how to transform such information into a matrix representation. We then illustrate how we use the matrix to formulate a preference ranking problem. 4.1 Discourse Role Matrix Figure 1 shows a text and its gold standard PDTB discourse relations. When a term appears in a discourse relation, the discourse role of this term is defined as the discourse relation type plus the argument span in which the term is located (i.e., the argument tag). For instance, consider the term “cananea” in the first relation. Since the relation type is a [ Japan normally depends heavily on the Highland Valley and Cananea mines as well as the Bougainville mine in Papua New Guinea. ]S1 [ Recently, Japan has been buying copper elsewhere. 
]S2 [ [ But as Highland Valley and Cananea begin operating, ]C3.1 [ they are expected to resume their roles as Japan’s suppliers. ]C3.2 ]S3 [ [ According to Fred Demler, metals economist for Drexel Burnham Lambert, New York, ]C4.1 [ “Highland Valley has already started operating ]C4.2 [ and Cananea is expected to do so soon.” ]C4.3 ]S4 5 discourse relations are present in the above text: 1. Implicit Comparison between S1 as Arg1, and S2 as Arg2 2. Explicit Comparison using “but” between S2 as Arg1, and S3 as Arg2 3. Explicit Temporal using “as” within S3 (Clause C3.1 as Arg1, and C3.2 as Arg2) 4. Implicit Expansion between S3 as Arg1, and S4 as Arg2 5. Explicit Expansion using “and” within S4 (Clause C4.2 as Arg1, and C4.3 as Arg2) Figure 1: An excerpt with four contiguous sentences from wsj 0437, showing five gold standard discourse relations. “Cananea” is highlighted for illustration. S# Terms copper cananea operat depend ... S1 nil Comp.Arg1 nil Comp.Arg1 S2 Comp.Arg2 nil nil nil Comp.Arg1 S3 nil Comp.Arg2 Comp.Arg2 nil Temp.Arg1 Temp.Arg1 Exp.Arg1 Exp.Arg1 S4 nil Exp.Arg2 Exp.Arg1 nil Exp.Arg2 Table 1: Discourse role matrix fragment for Figure 1. Rows correspond to sentences, columns to stemmed terms, and cells contain extracted discourse roles. Comparison and “cananea” is found in the Arg1 span, the discourse role of “cananea” is defined as Comp.Arg1. When terms appear in different relations and/or argument spans, they obtain different discourse roles in the text. For instance, “cananea” plays a different discourse role of Temp.Arg1 in the third relation in Figure 1. In the fourth relation, since “cananea” appears in both argument spans, it has two additional discourse roles, Exp.Arg1 and 999 Exp.Arg2. The discourse role matrix thus represents the different discourse roles of the terms across the continuous text units. We use sentences as the text units, and define terms to be the stemmed forms of the open class words: nouns, verbs, adjectives, and adverbs. We formulate the discourse role matrix such that it encodes the discourse roles of the terms across adjacent sentences. Table 1 shows a fragment of the matrix representation of the text in Figure 1. Columns correspond to the extracted terms; rows, the contiguous sentences. A cell CTi,Sj then contains the set of the discourse roles of the term Ti that appears in sentence Sj. For example, the term “cananea” from S1 takes part in the first relation, so the cell Ccananea,S1 contains the role Comp.Arg1. A cell may be empty (nil, as in Ccananea,S2) or contain multiple discourse roles (as in Ccananea,S3, as “cananea” in S3 participates in the second, third, and fourth relations). Given these discourse relations, building the matrix is straightforward: we note down the relations that a term Ti from a sentence Sj participates in, and record its discourse roles in the respective cell. We hypothesize that the sequence of discourse role transitions in a coherent text provides clues that distinguish it from an incoherent text. The discourse role matrix thus provides the foundation for computing such role transitions, on a per term basis. In fact, each column of the matrix corresponds to a lexical chain (Morris and Hirst, 1991) for a particular term across the whole text. The key differences from the traditional lexical chains are that our chain nodes’ entities are simplified (they share the same stemmed form, instead being connected by WordNet relations), but are further enriched by being typed with discourse relations. 
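Assuming sentence-level argument spans and a simple (relation type, argument tag) encoding of PDTB-style relations, building such a matrix can be sketched as follows. The data structures and the toy fragment are illustrative rather than the authors' implementation; for brevity the toy fragment omits the clause-internal relations 3 and 5 of Figure 1, so it reproduces only part of Table 1.

```python
from collections import defaultdict

def build_role_matrix(sent_terms, relations):
    """sent_terms: dict mapping sentence index -> set of stemmed open-class terms.
    relations: list of (rel_type, {"Arg1": [sentence ids], "Arg2": [sentence ids]}).
    Returns matrix[term][sentence] = set of discourse roles such as 'Comp.Arg1'."""
    matrix = defaultdict(lambda: defaultdict(set))
    for rel_type, args in relations:
        for arg_tag, sent_ids in args.items():
            for s in sent_ids:
                for term in sent_terms[s]:
                    matrix[term][s].add(f"{rel_type}.{arg_tag}")
    return matrix

# Toy fragment loosely mirroring Figure 1 (relations 1, 2 and 4 only).
sent_terms = {0: {"cananea", "depend"}, 1: {"copper"},
              2: {"cananea", "operat"}, 3: {"cananea", "operat"}}
relations = [("Comp", {"Arg1": [0], "Arg2": [1]}),
             ("Comp", {"Arg1": [1], "Arg2": [2]}),
             ("Exp",  {"Arg1": [2], "Arg2": [3]})]
m = build_role_matrix(sent_terms, relations)
# m["cananea"][2] -> {'Comp.Arg2', 'Exp.Arg1'}; m["cananea"][3] -> {'Exp.Arg2'}
```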
We compile the set of sub-sequences of discourse role transitions for every term in the matrix. These transitions tell us how the discourse role of a term varies through the progression of the text. For instance, “cananea” functions as Comp.Arg1 in S1 and Comp.Arg2 in S3, and plays the role of Exp.Arg1 and Exp.Arg2 in S3 and S4, respectively. As we have six relation types (Temp(oral), Cont(ingency), Comp(arison), Exp(ansion), EntRel and NoRel) and two argument tags (Arg1 and Arg2) for each type, we have a total of 6 × 2 = 12 possible discourse roles, plus a nil value. We define a discourse role transition as the sub-sequence of discourse roles for a term in multiple consecutive sentences. For example, the discourse role transition of “cananea” from S1 to S2 is Comp.Arg1→nil. As a cell may contain multiple discourse roles, a transition may produce multiple sub-sequences. For example, the length 2 sub-sequences for “cananea” from S3 to S4, are Comp.Arg2→Exp.Arg2, Temp.Arg1→Exp.Arg2, and Exp.Arg1→Exp.Arg2. Each sub-sequence has a probability that can be computed from the matrix. To illustrate the calculation, suppose the matrix fragment in Table 1 is the entire discourse role matrix. Then since there are in total 25 length 2 sub-sequences and the subsequence Comp.Arg2→Exp.Arg2 has a count of two, its probability is 2/25 = 0.08. A key property of our approach is that, while discourse transitions are captured locally on a per-term basis, the probabilities of the discourse transitions are aggregated globally, across all terms. We believe that the overall distribution of discourse role transitions for a coherent text is distinguishable from that for an incoherent text. Our model captures the distributional differences of such sub-sequences in coherent and incoherent text in training to determine an unseen text’s coherence. To evaluate the coherence of a text, we extract sub-sequences with various lengths from the discourse role matrix as features2 and compute the sub-sequence probabilities as the feature values. To further refine the computation of the subsequence distribution, we follow (Barzilay and Lapata, 2005) and divide the matrix into a salient matrix and a non-salient matrix. Terms (columns) with a frequency greater than a threshold form the salient matrix, while the rest form the non-salient matrix. The sub-sequence distributions are then calculated separately for these two matrices. 4.2 Preference Ranking While some texts can be said to be simply coherent or incoherent, often it is a matter of degree. A text can be less coherent when compared to one text, but more coherent when compared to another. As such, since the notion of coherence is relative, we feel that coherence assessment is better represented as 2Sub-sequences consisting of only nil values are not used as features. 1000 a ranking problem rather than a classification problem. Given a pair of texts, the system ranks them based on how coherent they are. Applications of such a system include differentiating a text from its permutation (i.e., the sentence ordering of the text is shuffled) and identifying a more well-written essay from a pair. Such a system can easily generalize from pairwise ranking into listwise, suitable for the ordinal ranking of a set of texts. Coherence scoring equations can also be deduced (Lapata and Barzilay, 2005) from such a model, yielding coherence scores. 
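A sketch of this feature extraction is given below. It enumerates length-n role sub-sequences per term over consecutive sentences, treats empty cells as nil, discards all-nil sub-sequences (footnote 2), and normalises counts into probabilities. The toy matrix is invented for illustration, and the exact treatment of all-nil sub-sequences in the normalising total is glossed over here, so this is an approximation of the paper's computation rather than a reproduction of it.

```python
from collections import Counter
from itertools import product

def transition_subsequences(matrix, n_sents, length=2):
    """matrix: term -> {sentence index -> set of discourse roles}.
    Enumerate length-`length` role sub-sequences per term over consecutive
    sentences; empty cells contribute the special role 'nil'."""
    counts = Counter()
    for term, cells in matrix.items():
        for start in range(n_sents - length + 1):
            role_sets = [cells.get(s, {"nil"}) or {"nil"}
                         for s in range(start, start + length)]
            for combo in product(*role_sets):
                if all(r == "nil" for r in combo):
                    continue  # all-nil sub-sequences are not used as features
                counts[combo] += 1
    return counts

# Toy matrix (could be produced by the previous sketch).
matrix = {"cananea": {0: {"Comp.Arg1"}, 2: {"Comp.Arg2", "Exp.Arg1"}, 3: {"Exp.Arg2"}},
          "copper":  {1: {"Comp.Arg2", "Comp.Arg1"}}}
counts = transition_subsequences(matrix, n_sents=4, length=2)
total = sum(counts.values())
features = {seq: c / total for seq, c in counts.items()}
# e.g. features[("Comp.Arg2", "Exp.Arg2")] is the probability of that transition
```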
To induce a model for preference ranking, we use the SVMlight package3 by (Joachims, 1999) with the preference ranking configuration for training and testing. All parameters are set to their default values. 5 Experiments We evaluate our coherence model on the task of text ordering ranking, a standard coherence evaluation task used in both (Barzilay and Lapata, 2005) and (Elsner et al., 2007). In this task, the system is asked to decide which of two texts is more coherent. The pair of texts consists of a source text and one of its permutations (i.e., the text’s sentence order is randomized). Assuming that the original text is always more discourse-coherent than its permutation, an ideal system will prefer the original to the permuted text. A system’s accuracy is thus the number of times the system correctly chooses the original divided by the total number of test pairs. In order to acquire a large data set for training and testing, we follow the approach in (Barzilay and Lapata, 2005) to create a collection of synthetic data from Wall Street Journal (WSJ) articles in the Penn Treebank. All of the WSJ articles are randomly split into a training and a testing set; 40 articles are held out from the training set for development. For each article, its sentences are permuted up to 20 times to create a set of permutations4. Each permutation is paired with its source text to form a pair. We also evaluate on two other data collections (cf. Table 2), provided by (Barzilay and Lapata, 2005), for a direct comparison with their entitybased model. These two data sets consist of Associated Press articles about earthquakes from the North 3http://svmlight.joachims.org/ 4Short articles may produce less than 20 permutations. WSJ Earthquakes Accidents Train # Articles 1040 97 100 # Pairs 19120 1862 1996 Avg. # Sents 22.0 10.4 11.5 Test # Articles 1079 99 100 # Pairs 19896 1956 1986 Table 2: Details of the WSJ, Earthquakes, and Accidents data sets, showing the number of training/testing articles, number of pairs of articles, and average length of an article (in sentences). American News Corpus, and narratives from the National Transportation Safety Board. These collections are much smaller than the WSJ data, as each training/testing set contains only up to 100 source articles. Similar to the WSJ data, we construct pairs by permuting each source article up to 20 times. Our model has two parameters: (1) the term frequency (TF) that is used as a threshold to identify salient terms, and (2) the lengths of the subsequences that are extracted as features. These parameters are tuned on the development set, and the best ones that produce the optimal accuracy are TF >= 2 and lengths of the sub-sequences <= 3. We must also be careful in using the automatic discourse parser. We note that the discourse parser of Lin et al. (2010) comes trained on the PDTB, which provides annotations on top of the whole WSJ data. As we also use the WSJ data for evaluation, we must avoid parsing an article that has already been used in training the parser to prevent training on the test data. We re-train the parser with 24 WSJ sections and use the trained parser to parse the sentences in our WSJ collection from the remaining section. We repeat this re-training/parsing process for all 25 sections. Because the Earthquakes and Accidents data do not overlap with the WSJ training data, we use the parser as distributed to parse these two data sets. 
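The pairwise set-up maps naturally onto SVMlight/SVMrank's ranking input, where each original/permutation pair forms one query group and the original receives the higher target value. The sketch below shows one way such a training file could be written; extract_features is an assumed black box returning a {feature_id: value} map with integer feature ids, and the function is not the authors' actual pipeline (for instance, it does not guarantee 20 distinct permutations for short articles).

```python
import random

def write_svmrank_pairs(articles, extract_features, path, n_perms=20, seed=0):
    """articles: list of articles, each a list of sentence strings.
    Write one ranking group (qid) per (original, permutation) pair:
    target 2 for the original ordering, target 1 for the permutation."""
    rng = random.Random(seed)
    qid = 0
    with open(path, "w") as out:
        for sents in articles:
            for _ in range(n_perms):
                perm = sents[:]
                rng.shuffle(perm)
                if perm == sents:
                    continue  # skip permutations identical to the original
                qid += 1
                for target, text in ((2, sents), (1, perm)):
                    feats = extract_features(text)  # {int feature id: float value}
                    body = " ".join(f"{k}:{v}" for k, v in sorted(feats.items()))
                    out.write(f"{target} qid:{qid} {body}\n")
```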
Since the discourse parser utilizes paragraph boundaries but a permuted text does not have such boundaries, we ignore paragraph boundaries and treat the source text as if it has only one paragraph. This is to make sure that we do not give the system extra information because of this difference between the source and permuted text. 1001 5.1 Human Evaluation While the text ordering ranking task has been used in previous studies, two key questions about this task have remained unaddressed in the previous work: (1) to what extent is the assumption that the source text is more coherent than its permutation correct? and (2) how well do humans perform on this task? The answer to the first is needed to validate the correctness of this synthetic task, while the second aims to obtain the upper bound for evaluation. We conduct a human evaluation to answer these questions. We randomly select 50 source text/permutation pairs from each of the WSJ, Earthquakes, and Accidents training sets. We observe that some of the source texts have formulaic structures in their initial sentences that give away the correct ordering. Sources from the Earthquakes data always begin with a headline sentence and a location-newswire sentence, and many sources from the Accidents data start with two sentences of “This is preliminary . . . errors. Any errors . . . completed.” We remove these sentences from the source and permuted texts, to avoid the subjects judging based on these clues instead of textual coherence. For each set of 50 pairs, we assigned two human subjects (who are not authors of this paper) to perform the ranking. The subjects are told to identify the source text from the pair. When both subjects rank a source text higher than its permutation, we interpret it as the subjects agreeing that the source text is more coherent than the permutation. Table 3 shows the inter-subject agreements. WSJ Earthquakes Accidents Overall 90.0 90.0 94.0 91.3 Table 3: Inter-subject agreements on the three data sets. While our study is limited and only indicative, we conclude from these results that the task is tractable. Also, since our subjects’ judgments correlate highly with the gold standard, the assumption that the original text is always more coherent than the permuted text is supported. Importantly though, human performance is not perfect, suggesting fair upper bound limits on system performance. We note that the Accidents data set is relatively easier to rank, as it has a higher upper bound than the other two. 5.2 Baseline Barzilay and Lapata (2005) showed that their entitybased model is able to distinguish a source text from its permutation accurately. Thus, it can serve as a good comparison point for our discourse relationbased model. We compare against their Syntax+Salience setting. Since they did not automatically determine the coreferential information of a permuted text but obtained that from its corresponding source text, we do not perform automatic coreference resolution in our reimplementation of their system. For fair comparison, we follow their experiment settings as closely as possible. We re-use their Earthquakes and Accidents dataset as is, using their exact permutations and pre-processing. For the WSJ data, we need to perform our own pre-processing, thus we employed the Stanford parser5 to perform sentence segmentation and constituent parsing, followed by entity extraction. 5.3 Results We perform a series of experiments to answer the following four questions: 1. Does our model outperform the baseline? 2. 
How do the different features derived from using relation types, argument tags, and salience information affect performance? 3. Can the combination of the baseline and our model outperform the single models? 4. How does system performance of these models compare with human performance on the task? Baseline results are shown in the first row of Table 4. The results on the Earthquakes and Accidents data are quite similar to those published in (Barzilay and Lapata, 2005) (they reported 83.4% on Earthquakes and 89.7% on Accidents), validating the correctness of our reimplementation of their method. Row 2 in Table 4 shows the overall performance of the proposed refined model, answering Question 1. The model setting of Type+Arg+Sal means that the model makes use of the discourse roles consisting of 1) relation types and 2) argument tags (e.g., 5http://nlp.stanford.edu/software/ lex-parser.shtml 1002 WSJ Earthquakes Accidents Baseline 85.71 83.59 89.93 Type+Arg+Sal 88.06** 86.50** 89.38 Arg+Sal 88.28** 85.89* 87.06 Type+Sal 87.06** 82.98 86.05 Type+Arg 85.98 82.67 87.87 Baseline & 89.25** 89.72** 91.64** Type+Arg+Sal Table 4: Test set ranking accuracy. The first row shows the baseline performance, the next four show our model with different settings, and the last row is a combined model. Double (**) and single (*) asterisks indicate that the respective model significantly outperforms the baseline at p < 0.01 and p < 0.05, respectively. We follow Barzilay and Lapata (2008) and use the Fisher Sign test. the discourse role Comp.Arg2 consists of the type Comp(arison) and the tag Arg2), and 3) two distinct feature sets from salient and non-salient terms. Comparing these accuracies to the baseline, our model significantly outperforms the baseline with p < 0.01 in the WSJ and Earthquakes data sets with accuracy increments of 2.35% and 2.91%, respectively. In Accidents, our model’s performance is slightly lower than the baseline, but the difference is not statistically significant. To answer Question 2, we perform feature ablation testing. We eliminate each of the information sources from the full model. In Row 3, we first delete relation types from the discourse roles, which causes discourse roles to only contain the argument tags. A discourse role such as Comp.Arg2 becomes Arg2 after deleting the relation type. Comparing Row 3 to Row 2, we see performance reductions on the Earthquakes and Accidents data after eliminating type information. Row 4 measures the effect of omitting argument tags (Type+Sal). In this setting, the discourse role Comp.Arg2 reduces to Comp. We see a large reduction in performance across all three data sets. This model is also most similar to the basic na¨ıve model in Section 3. These results suggest that the argument tag information plays an important role in our discourse role transition model. Row 5 omits the salience information (Type+Arg), which also markedly reduces performance. This result supports the use of salience, in line with the conclusion drawn in (Barzilay and Lapata, 2005). To answer Question 3, we train and test a combined model using features from both the baseline and our model (shown as Row 6 in Table 4). The entity-based model of Barzilay and Lapata (2005) connects the local entity transition with textual coherence, while our model looks at the patterns of discourse relation transitions. As these two models focus on different aspects of coherence, we expect that they are complementary to each other. 
The combined model in all three data sets gives the highest performance in comparison to all single models, and it significantly outperforms the baseline model with p < 0.01. This confirms that the combined model is linguistically richer than the single models as it integrates different information together, and the entitybased model and our model are synergistic. To answer Question 4, when compared to the human upper bound (Table 3), the performance gaps for the baseline model are relatively large, while those for our full model are more acceptable in the WSJ and Earthquakes data. For the combined model, the error rates are significantly reduced in all three data sets. The average error rate reductions against 100% are 9.57% for the full model and 26.37% for the combined model. If we compute the average error rate reductions against the human upper bounds (rather than an oracular 100%), the average error rate reduction for the full model is 29% and that for the combined model is 73%. While these are only indicative results, they do highlight the significant gains that our model is making towards reaching human performance levels. We further note that some of the permuted texts may read as coherently as the original text. This phenomenon has been observed in several natural language synthesis tasks such as generation and summarization, in which a single gold standard is inadequate to fully assess performance. As such, both automated systems and humans may actually perform better than our performance measures indicate. We leave it to future work to measure the impact of this phenomenon. 6 Analysis and Discussion When we compare the accuracies of the full model in the three data sets (Row 2), the accuracy in the Accidents data is the highest (89.38%), followed by 1003 that in the WSJ (88.06%), with Earthquakes at the lowest (86.50%). To explain the variation, we examine the ratio between the number of the relations in the article and the article length (i.e., number of sentences). This ratio is 1.22 for the Accidents source articles, 1.2 for the WSJ, and 1.08 for Earthquakes. The relation/length ratio gives us an idea of how often a sentence participates in discourse relations. A high ratio means that the article is densely interconnected by discourse relations, and may make distinguishing this article from its permutation easier compared to that for a loosely connected article. We expect that when a text contains more discourse relation types (i.e., Temporal, Contingency, Comparison, Expansion) and less EntRel and NoRel types, it is easier to compute how coherent this text is. This is because compared to EntRel and NoRel, these four discourse relations can combine to produce meaningful transitions, such as the example Text (2). To examine how this affects performance, we calculate the average ratio between the number of the four discourse relations in the permuted text and the length for the permuted text. The ratio is 0.58 for those that are correctly ranked by our system, and 0.48 for those that are incorrectly ranked, which supports our hypothesis. We also examined the learning curves for our Type+Arg+Sal model, the baseline model, and the combined model on the data sets, as shown in Figure 2(a)–2(c). In the WSJ data, the accuracies for all three models increase rapidly as more pairs are added to the training set. After 2,000 pairs, the increase slows until 8,000 pairs, after which the curve is nearly flat. 
From the curves, our model consistently performs better than the baseline with a significant gap, and the combined model also consistently and significantly outperforms the other two. Only about half of the total training data is needed to reach optimal performance for all three models. The learning curves in the Earthquakes data show that the performance for all models is always increasing as more training pairs are utilized. The Type+Arg+Sal and combined models start with lower accuracies than the baseline, but catch up with it at 1,000 and 400 pairs, respectively, and consistently outperform the baseline beyond this point. On the other hand, the learning curves for the Type+Arg+Sal and baseline models in Accidents do not show any one curve consistently better than the other: our model outperforms in the middle segment but underperforms in the first and last segments. The curve for the combined model shows a consistently significant gap between it and the other two curves after the point at 400 pairs.

Figure 2: Learning curves for the Type+Arg+Sal, the baseline, and the combined models on the three data sets. (a) WSJ. (b) Earthquakes. (c) Accidents.

With the performance of the model as it is, how can future work improve upon it? We point out one weakness that we plan to explore. We use the full Type+Arg+Sal model trained on the WSJ training
The matrix schematically represents term occurrences in text units and associates each occurrence with its discourse roles in the text units. In our approach, n-gram sub-sequences of transitions per term in the discourse role matrix then constitute the more finegrained evidence used in our model to distinguish coherence from incoherence. When applied to distinguish a source text from a sentence-reordered permutation, our model significantly outperforms the previous state-of-the-art, the entity-based local coherence model. While the entity-based model captures repetitive mentions of entities, our discourse relation-based model gleans its evidence from the argumentative and discourse structure of the text. Our model is complementary to the entity-based model, as it tackles the same problem from a different perspective. Experiments validate our claim, with a combined model outperforming both single models. The idea of modeling coherence with discourse relations and formulating it in a discourse role matrix can also be applied to other NLP tasks. We plan to apply our methodology to other tasks, such as summarization, text generation and essay scoring, which also need to produce and assess discourse coherence. References Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: an entity-based approach. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005), pages 141–148, Morristown, NJ, USA. Association for Computational Linguistics. Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34:1–34, March. Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of the Human Language Technology Conference / North American Chapter of the Association for Computational Linguistics Annual Meeting 2004. Micha Elsner, Joseph Austerweil, and Eugene Charniak. 2007. A unified local and global model for discourse coherence. In Proceedings of the Conference on Human Language Technology and North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2007), Rochester, New York, USA, April. Robert Elwell and Jason Baldridge. 2008. Discourse connective argument identification with connective specific rankers. In Proceedings of the IEEE International Conference on Semantic Computing (ICSC 2010), Washington, DC, USA. Barbara J. Grosz, Scott Weinstein, and Aravind K. Joshi. 1995. Centering: a framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225, June. Thorsten Joachims. 1999. Making large-scale support vector machine learning practical. In Bernhard 1005 Schlkopf, Christopher J. C. Burges, and Alexander J. Smola, editors, Advances in Kernel Methods – Support Vector Learning, pages 169–184. MIT Press, Cambridge, MA, USA. Nikiforos Karamanis. 2007. Supplementing entity coherence with local rhetorical relations for information ordering. Journal of Logic, Language and Information, 16:445–464, October. Mirella Lapata and Regina Barzilay. 2005. Automatic evaluation of text coherence: Models and representations. In Leslie Pack Kaelbling and Alessandro Saffiotti, editors, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, UK. Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP 2009), Singapore. Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2010. A PDTB-styled end-to-end discourse parser. Technical Report TRB8/10, School of Computing, National University of Singapore, August. William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8(3):243–281. Daniel Marcu. 1996. Distinguishing between coherent and incoherent texts. In The Proceedings of the Student Conference on Computational Linguistics in Montreal, pages 136–143. Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21– 48, March. Emily Pitler and Ani Nenkova. 2009. Using syntax to disambiguate explicit discourse connectives in text. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, Singapore. Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind Joshi. 2008. Easily identifiable discourse relations. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008) Short Papers, Manchester, UK. Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACL-IJCNLP 2009), Singapore. Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008). Radu Soricut and Daniel Marcu. 2006. Discourse generation using utility-trained coherence models. In Proceedings of the COLING/ACL Main Conference Poster Sessions, pages 803–810, Morristown, NJ, USA. Association for Computational Linguistics. WenTing Wang, Jian Su, and Chew Lim Tan. 2010. Kernel based discourse relation recognition with temporal ordering information. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), Uppsala, Sweden, July. Bonnie Webber. 2004. D-LTAG: Extending lexicalized TAG to discourse. Cognitive Science, 28(5):751–779. Ben Wellner and James Pustejovsky. 2007. Automatically identifying the arguments of discourse connectives. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007), Prague, Czech Republic. 1006
2011
100
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1007–1017, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Underspecifying and Predicting Voice for Surface Realisation Ranking Sina Zarrieß, Aoife Cahill and Jonas Kuhn Institut f¨ur maschinelle Sprachverarbeitung Universit¨at Stuttgart, Germany {sina.zarriess,aoife.cahill,jonas.kuhn}@ims.uni-stuttgart.de Abstract This paper addresses a data-driven surface realisation model based on a large-scale reversible grammar of German. We investigate the relationship between the surface realisation performance and the character of the input to generation, i.e. its degree of underspecification. We extend a syntactic surface realisation system, which can be trained to choose among word order variants, such that the candidate set includes active and passive variants. This allows us to study the interaction of voice and word order alternations in realistic German corpus data. We show that with an appropriately underspecified input, a linguistically informed realisation model trained to regenerate strings from the underlying semantic representation achieves 91.5% accuracy (over a baseline of 82.5%) in the prediction of the original voice. 1 Introduction This paper1 presents work on modelling the usage of voice and word order alternations in a free word order language. Given a set of meaning-equivalent candidate sentences, such as in the simplified English Example (1), our model makes predictions about which candidate sentence is most appropriate or natural given the context. (1) Context: The Parliament started the debate about the state budget in April. a. It wasn’t until June that the Parliament approved it. b. It wasn’t until June that it was approved by the Parliament. c. It wasn’t until June that it was approved. We address the problem of predicting the usage of linguistic alternations in the framework of a surface 1This work has been supported by the Deutsche Forschungsgemeinschaft (DFG; German Research Foundation) in SFB 732 Incremental specification in context, project D2 (PIs: Jonas Kuhn and Christian Rohrer). realisation ranking system. Such ranking systems are practically relevant for the real-world application of grammar-based generators that usually generate several grammatical surface sentences from a given abstract input, e.g. (Velldal and Oepen, 2006). Moreover, this framework allows for detailed experimental studies of the interaction of specific linguistic features. Thus it has been demonstrated that for free word order languages like German, word order prediction quality can be improved with carefully designed, linguistically informed models capturing information-structural strategies (Filippova and Strube, 2007; Cahill and Riester, 2009). This paper is situated in the same framework, using rich linguistic representations over corpus data for machine learning of realisation ranking. However, we go beyond the task of finding the correct ordering for an almost fixed set of word forms. Quite obviously, word order is only one of the means at a speaker’s disposal for expressing some content in a contextually appropriate form; we add systematic alternations like the voice alternation (active vs. passive) to the picture. As an alternative way of promoting or demoting the prominence of a syntactic argument, its interaction with word ordering strategies in real corpus data is of high theoretical interest (Aissen, 1999; Aissen, 2003; Bresnan et al., 2001). 
Our main goals are (i) to establish a corpus-based surface realisation framework for empirically investigating interactions of voice and word order in German, (ii) to design an input representation for generation capturing voice alternations in a variety of contexts, (iii) to better understand the relationship between the performance of a generation ranking model and the type of realisation candidates available in its input. In working towards these goals, this paper addresses the question of evaluation. We conduct a pilot human evaluation on the voice al1007 ternation data and relate our findings to our results established in the automatic ranking experiments. Addressing interactions among a range of grammatical and discourse phenomena on realistic corpus data turns out to be a major methodological challenge for data-driven surface realisation. The set of candidate realisations available for ranking will influence the findings, and here, existing surface realisers vary considerably. Belz et al. (2010) point out the differences across approaches in the type of syntactic and semantic information present and absent in the input representation; and it is the type of underspecification that determines the number (and character) of available candidate realisations and, hence, the complexity of the realisation task. We study the effect of varying degrees of underspecification explicitly, extending a syntactic generation system by a semantic component capturing voice alternations. In regeneration studies involving underspecified underlying representations, corpusoriented work reveals an additional methodological challenge. When using standard semantic representations, as common in broad-coverage work in semantic parsing (i.e., from the point of view of analysis), alternative variants for sentence realisation will often receive slightly different representations: In the context of (1), the continuation (1-c) is presumably more natural than (1-b), but with a standard sentence-bounded semantic analysis, only (1-a) and (1-b) would receive equivalent representations. Rather than waiting for the availability of robust and reliable techniques for detecting the reference of implicit arguments in analysis (or for contextually aware reasoning components), we adopt a relatively simple heuristic approach (see Section 3.1) that approximates the desired equivalences by augmented representations for examples like (1-c). This way we can overcome an extremely skewed distribution in the naturally occurring meaning-equivalent active vs. passive sentences, a factor which we believe justifies taking the risk of occasional overgeneration. The paper is structured as follows: Section 2 situates our methodology with respect to other work on surface realisation and briefly summarises the relevant theoretical linguistic background. In Section 3, we present our generation architecture and the design of the input representation. Section 4 describes the setup for the experiments in Section 5. In Section 6, we present the results from the human evaluation. 2 Related Work 2.1 Generation Background The first widely known data-driven approach to surface realisation, or tactical generation, (Langkilde and Knight, 1998) used language-model ngram statistics on a word lattice of candidate realisations to guide a ranker. Subsequent work explored ways of exploiting linguistically annotated data for trainable generation models (Ratnaparkhi, 2000; Marciniak and Strube, 2005; Belz, 2005, a.o.). 
Work on data-driven approaches has led to insights into the importance of linguistic features for sentence linearisation decisions (Ringger et al., 2004; Filippova and Strube, 2009). The availability of discriminative learning techniques for the ranking of candidate analyses output by broad-coverage grammars with rich linguistic representations, originally in parsing (Riezler et al., 2000; Riezler et al., 2002), has also led to a revival of interest in linguistically sophisticated reversible grammars as the basis for surface realisation (Velldal and Oepen, 2006; Cahill et al., 2007). The grammar generates candidate analyses for an underlying representation and the ranker’s task is to predict the contextually appropriate realisation. The work that is most closely related to ours is Velldal (2008). He uses an MRS representation derived by an HPSG grammar that can be underspecified for information status. In his case, the underspecification is encoded in the grammar and not directly controlled. In multilingually oriented linearisation work, Bohnet et al. (2010) generate from semantic corpus annotations included in the CoNLL’09 shared task data. However, they note that these annotations are not suitable for full generation since they are often incomplete. Thus, it is not clear to which degree these annotations are actually underspecified for certain paraphrases. 2.2 Linguistic Background In competition-based linguistic theories (Optimality Theory and related frameworks), the use of argument alternations is construed as an effect of markedness hierarchies (Aissen, 1999; Aissen, 2003). Argument functions (subject, object, . . . ) on 1008 the one hand and the various properties that argument phrases can bear (person, animacy, definiteness) on the other are organised in markedness hierarchies. Wherever possible, there is a tendency to align the hierarchies, i.e., use prominent functions to realise prominently marked argument phrases. For instance, Bresnan et al. (2001) find that there is a statistical tendency in English to passivise a verb if the patient is higher on the person scale than the agent, but an active is grammatically possible. Bresnan et al. (2007) correlate the use of the English dative alternation to a number of features such as givenness, pronominalisation, definiteness, constituent length, animacy of the involved verb arguments. These features are assumed to reflect the discourse acessibility of the arguments. Interestingly, the properties that have been used to model argument alternations in strict word order languages like English have been identified as factors that influence word order in free word order languages like German, see Filippova and Strube (2007) for a number of pointers. Cahill and Riester (2009) implement a model for German word order variation that approximates the information status of constituents through morphological features like definiteness, pronominalisation etc. We are not aware of any corpus-based generation studies investigating how these properties relate to argument alternations in free word order languages. 3 Generation Architecture Our data-driven methodology for investigating factors relevant to surface realisation uses a regeneration set-up2 with two main components: a) a grammar-based component used to parse a corpus sentence and map it to all its meaning-equivalent surface realisations, b) a statistical ranking component used to select the correct, i.e. contextually most appropriate surface realisation. 
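Schematically, this regeneration set-up amounts to a parse-regenerate-rank loop. The following minimal Python sketch is purely illustrative: the functions parse, generate_candidates and score stand in for the grammar-based and statistical components described below and are not part of any existing toolkit.

```python
# Hypothetical sketch of the regeneration loop, assuming callable stand-ins
# for the grammar-based analysis/generation steps and the learned ranker.

def rank_realisations(corpus_sentence, parse, generate_candidates, score):
    """Analyse a corpus sentence, regenerate all meaning-equivalent surface
    strings from the resulting underlying representation, and order them by
    the ranker's score."""
    underlying = parse(corpus_sentence)            # grammar-based analysis
    candidates = generate_candidates(underlying)   # all grammatical realisations
    return sorted(candidates, key=score, reverse=True)

def mark_gold(corpus_sentence, candidates):
    """At training time, the original corpus string identifies which of its
    own regenerated candidates should be ranked highest."""
    return [(c, c == corpus_sentence) for c in candidates]
```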
Two variants of this set-up that we use are sketched in Figure 1. We generally use a hand-crafted, broad-coverage LFG for German (Rohrer and Forst, 2006) to parse a corpus sentence into a f(unctional) structure3 and generate all surface realisations from a given 2Compare the bidirectional competition set-up in some Optimality-Theoretic work, e.g., (Kuhn, 2003). 3The choice among alternative f-structures is done with a discriminative model (Forst, 2007). Sntx SVM Ranker Snta1 Snta2 ... Sntam LFG grammar FSa LFG grammar Snti Snty SVM Ranker Sntb1 Snta1 Snta2... Sntbn LFG Grammar FSa FSb Reverse Sem. Rules SEM Sem. Rules FS1 LFG Grammar Snti Figure 1: Generation pipelines f-structure, following the generation approach of Cahill et al. (2007). F-structures are attributevalue matrices representing grammatical functions and morphosyntactic features; their theoretical motivation lies in the abstraction over details of surface realisation. The grammar is implemented in the XLE framework (Crouch et al., 2006), which allows for reversible use of the same declarative grammar in the parsing and generation direction. To obtain a more abstract underlying representation (in the pipeline on the right-hand side of Figure 1), the present work uses an additional semantic construction component (Crouch and King, 2006; Zarrieß, 2009) to map LFG f-structures to meaning representations. For the reverse direction, the meaning representations are mapped to f-structures which can then be mapped to surface strings by the XLE generator (Zarrieß and Kuhn, 2010). For the final realisation ranking step in both pipelines, we used SVMrank, a Support Vector Machine-based learning tool (Joachims, 1996). The ranking step is thus technically independent from the LFG-based component. However, the grammar is used to produce the training data, pairs of corpus sentences and the possible alternations. The two pipelines allow us to vary the degree to which the generation input is underspecified. An fstructure abstracts away from word order, i.e. the candidate set will contain just word order alternations. In the semantic input, syntactic function and voice are underspecified, so a larger set of surface realisation candidates is generated. Figure 2 illustrates the two representation levels for an active and 1009 a passive sentence. The subject of the passive and the object of the active f-structure are mapped to the same role (patient) in the meaning representation. 3.1 Issues with “naive” underspecification In order to create an underspecified voice representation that does indeed leave open the realisation options available to the speaker/writer, it is often not sufficient to remove just the syntactic function information. For instance, the subject of the active sentence (2) is an arbitrary reference pronoun man “one” which cannot be used as an oblique agent in a passive, sentence (2-b) is ungrammatical. (2) a. Man One hat has den the Kanzler chancellor gesehen. seen. b. *Der The Kanzler chancellor wurde was von by man one gesehen. seen. So, when combined with the grammar, the meaning representation for (2) in Figure 2 contains implicit information about the voice of the original corpus sentence; the candidate set will not include any passive realisations. However, a passive realisation without the oblique agent in the by-phrase, as in Example (3), is a very natural variant. (3) Der The Kanzler chancellor wurde was gesehen. seen. 
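To make the contrast concrete, the role-based meaning representations of Figure 2 can be pictured as simple attribute dictionaries. The sketch below is an expository assumption (the dictionary layout and function name are ours, not the system's data structures); it shows why, without further normalisation, the generic-agent active in (2) and the agentless passive in (3) are not treated as meaning-equivalent.

```python
# Illustrative encoding of the role-based meaning representations in Figure 2;
# the dictionary format is an assumption made for exposition only.

def meaning(head, agent, patient, tense="PAST"):
    return {"HEAD": head, "TENSE": tense, "agent": agent, "patient": patient}

# Naive ("analysis-oriented") representations for examples (2) and (3): the
# generic-pronoun active and the agentless passive differ in the agent slot,
# so a sentence-bounded analysis does not treat them as meaning-equivalent.
sem_2 = meaning("see", agent="one", patient="chancellor")        # example (2)
sem_3 = meaning("see", agent="implicit", patient="chancellor")   # example (3)
assert sem_2 != sem_3
```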
The reverse situation arises frequently too: passive sentences where the agent role is not overtly realised. Given the standard, “analysis-oriented” meaning representation for Sentence (4) in Figure 2, the realiser will not generate an active realisation since the agent role cannot be instantiated by any phrase in the grammar. However, depending on the exact context there are typically options for realising the subject phrase in an active with very little descriptive content. Ideally, one would like to account for these phenomena in a meaning representation that underspecifies the lexicalisation of discourse referents, and also captures the reference of implicit arguments. Especially the latter task has hardly been addressed in NLP applications (but see Gerber and Chai (2010)). In order to work around that problem, we implemented some simple heuristics which underspecify the realisation of certain verb arguments. These rules define: 1. a set of pronouns (generic and neutral pronouns, universal quantifiers) that correspond to “trivial” agents in active and implicit agents Active Passive 2-role trans. 71% (82%) 10% (2%) 1-role trans. 11% (0%) 8% (16%) Table 1: Distribution of voices in SEMh (SEMn) in passive sentences; 2. a set of prepositional adjuncts in passive sentences that correspond to subjects in active sentence (e.g. causative and instrumental prepositions like durch “by means of”); 3. certain syntactic contexts where special underspecification devices are needed, e.g. coordinations or embeddings, see Zarrieß and Kuhn (2010) for examples. In the following, we will distinguish 1-role transitives where the agent is “trivial” or implicit from 2-role transitives with a non-implicit agent. By means of the extended underspecification rules for voice, the sentences in (2) and (3) receive an identical meaning representation. As a result, our surface realiser can produce an active alternation for (3) and a passive alternation for (2). In the following, we will refer to the extended representations as SEMh (“heuristic semantics”), and to the original representations as SEMn (“naive semantics”). We are aware of the fact that these approximations introduce some noise into the data and do not always represent the underlying referents correctly. For instance, the implicit agent in a passive need not be “trivial” but can correspond to an actual discourse referent. However, we consider these heuristics as a first step towards capturing an important discourse function of the passive alternation, namely the deletion of the agent role. If we did not treat the passives with an implicit agent on a par with certain actives, we would have to ignore a major portion of the passives occurring in corpus data. Table 1 summarises the distribution of the voices for the heuristic meaning representation SEMh on the data-set we will introduce in Section 4, with the distribution for the naive representation SEMn in parentheses. 4 Experimental Set-up Data To obtain a sizable set of realistic corpus examples for our experiments on voice alternations, we created our own dataset of input sentences and representations, instead of building on treebank examples as Cahill et al. (2007) do. 
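The effect of these heuristic rules can be illustrated with a small sketch before the data set itself is described. The word list, role format and function name below are simplified assumptions for exposition; rules 2 and 3 (prepositional agent phrases and special syntactic contexts) would require additional machinery.

```python
# Hypothetical sketch of the heuristic underspecification step: "trivial"
# active agents (generic/neutral pronouns, universal quantifiers) and implicit
# passive agents are collapsed onto one underspecified value, so that pairs
# like (2) and (3) receive identical representations.

TRIVIAL_AGENTS = {"one", "implicit"}   # stands in for generic pronouns and unrealised agents

def underspecify_agent(sem):
    if sem.get("agent") in TRIVIAL_AGENTS:
        sem = dict(sem, agent="underspecified")
    return sem

sem_2 = {"HEAD": "see", "TENSE": "PAST", "agent": "one", "patient": "chancellor"}
sem_3 = {"HEAD": "see", "TENSE": "PAST", "agent": "implicit", "patient": "chancellor"}
assert underspecify_agent(sem_2) == underspecify_agent(sem_3)
```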
We extracted 19,905 sentences, all containing at least one transitive verb, from the HGC, a huge German corpus of newspaper text (204.5 million tokens). The sentences are automatically parsed with the German LFG grammar. The resulting f-structure parses are transferred to meaning representations and mapped back to f-structure charts. For our generation experiments, we only use those f-structure charts that the XLE generator can map back to a set of surface realisations. This results in a total of 1236 test sentences and 8044 sentences in our training set. The data loss is mostly due to the fact that the XLE generator often fails on incomplete parses, and on very long sentences. Nevertheless, the average sentence length (17.28) and number of surface realisations (see Table 2) are higher than in Cahill et al. (2007).
Figure 2: F-structure pair for passive-active alternation.
f-structure Example (2): [PRED 'see<(↑SUBJ)(↑OBJ)>', SUBJ [PRED 'one'], OBJ [PRED 'chancellor'], TOPIC ['one'], PASS −]
f-structure Example (3): [PRED 'see<NULL (↑SUBJ)>', SUBJ [PRED 'chancellor'], TOPIC ['chancellor'], PASS +]
semantics Example (2): HEAD(see), PAST(see), ROLE(agent,see,one), ROLE(patient,see,chancellor)
semantics Example (3): HEAD(see), PAST(see), ROLE(agent,see,implicit), ROLE(patient,see,chancellor)
Labelling For the training of our ranking model, we have to tell the learner how closely each surface realisation candidate resembles the original corpus sentence. We distinguish the rank categories: “1” identical to the corpus string, “2” identical to the corpus string ignoring punctuation, “3” small edit distance (< 4) to the corpus string ignoring punctuation, “4” different from the corpus sentence. In one of our experiments (Section 5.1), we used the rank category “5” to explicitly label the surface realisations derived from the alternation f-structure that does not correspond to the parse of the original corpus sentence. The intermediate rank categories “2” and “3” are useful since the grammar does not always regenerate the exact corpus string, see Cahill et al. (2007) for explanation.
Features The linguistic theories sketched in Section 2.2 correlate morphological, syntactic and semantic properties of constituents (or discourse referents) with their order and argument realisation. In our system, this correlation is modelled by a combination of linguistic properties that can be extracted from the f-structure or meaning representation and of the surface order that is read off the sentence string. Standard n-gram features are also used.4 The feature model is built as follows: for every lemma in the f-structure, we extract a set of morphological properties (definiteness, person, pronominal status etc.), the voice of the verbal head, its syntactic and semantic role, and a set of information status features following Cahill and Riester (2009). These properties are combined in two ways: a) Precedence features: relative order of properties in the surface string, e.g. “theme < agent in passive”, “1st person < 3rd person”; b) “scale alignment” features (ScalAl.): combinations of voice and role properties with morphological properties, e.g. “subject is singular”, “agent is 3rd person in active voice” (these are surface-independent, identical for each alternation candidate).
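As an illustration of how these two feature families might be assembled, consider the following sketch; the per-argument record layout and the feature-name strings are invented for exposition and do not correspond to the exact templates used in the system.

```python
# Hedged sketch of precedence and scale-alignment feature extraction from
# per-argument property bundles; all names are illustrative assumptions.

def extract_features(arguments, voice):
    """arguments: list of dicts with e.g. 'role', 'person', 'definite',
    'pronominal', 'position' (surface index in the candidate string)."""
    features = set()
    # a) Precedence features: relative order of properties in the surface string.
    for a in arguments:
        for b in arguments:
            if a["position"] < b["position"]:
                features.add(f"prec:{a['role']}<{b['role']}:voice={voice}")
                features.add(f"prec:person{a['person']}<person{b['person']}")
    # b) Scale-alignment features: voice/role combined with morphological
    #    properties; these do not depend on surface order.
    for a in arguments:
        features.add(f"scal:{a['role']}:person={a['person']}:voice={voice}")
        features.add(f"scal:{a['role']}:definite={a['definite']}")
        features.add(f"scal:{a['role']}:pronominal={a['pronominal']}")
    return features

# Example: a passive candidate where the patient precedes the agent.
candidate = [
    {"role": "patient", "person": 3, "definite": True, "pronominal": False, "position": 0},
    {"role": "agent", "person": 3, "definite": False, "pronominal": False, "position": 5},
]
print(sorted(extract_features(candidate, voice="passive")))
```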
The model for which we present our results is based on sentence-internal features only; as Cahill and Riester (2009) showed, these feature carry a considerable amount of implicit information about the discourse context (e.g. in the shape of referring expressions). We also implemented a set of explicitly inter-sentential features, inspired by Centering Theory (Grosz et al., 1995). This model did not improve over the intra-sentential model. Evaluation Measures In order to assess the general quality of our generation ranking models, we 4The language model is trained on the German data release for the 2009 ACL Workshop on Machine Translation shared task, 11,991,277 total sentences. 1011 FS SEMn SEMh Avg. # strings 36.7 68.2 75.8 Random Match 16.98 10.72 7.28 LM Match 15.45 15.04 11.89 BLEU 0.68 0.68 0.65 NIST 13.01 12.95 12.69 Ling. Model Match 27.91 27.66 26.38 BLEU 0.764 0.759 0.747 NIST 13.18 13.14 13.01 Table 2: Evaluation of Experiment 1 use several standard measures: a) exact match: how often does the model select the original corpus sentence, b) BLEU: n-gram overlap between top-ranked and original sentence, c) NIST: modification of BLEU giving more weight to less frequent n-grams. Second, we are interested in the model’s performance wrt. specific linguistic criteria. We report the following accuracies: d) Voice: how often does the model select a sentence realising the correct voice, e) Precedence: how often does the model generate the right order of the verb arguments (agent and patient), and f) Vorfeld: how often does the model correctly predict the verb arguments to appear in the sentence initial position before the finite verb, the so-called Vorfeld. See Sections 5.3 and 6 for a discussion of these measures. 5 Experiments 5.1 Exp. 1: Effect of Underspecified Input We investigate the effect of the input’s underspecification on a state-of-the-art surface realisation ranking model. This model implements the entire feature set described in Section 4 (it is further analysed in the subsequent experiments). We built 3 datasets from our alternation data: FS - candidates generated from the f-structure; SEMn - realisations from the naive meaning representations; SEMh - candidates from the heuristically underspecified meaning representation. Thus, we keep the set of original corpus sentences (=the target realisations) constant, but train and test the model on different candidate sets. In Table 2, we compare the performance of the linguistically informed model described in Section 4 on the candidates sets against a random choice and a language model (LM) baseline. The differences in BLEU between the candidate sets and models are FS SEMn SEMh SEMn∗ All Trans. Voice Acc. 100 98.06 91.05 97.59 Voice Spec. 100 22.8 0 0 Majority BL 82.4 98.1 2-role Trans. Voice Acc. 100 97.7 91.8 97.59 Voice Spec. 100 8.33 0 0 Majority BL 88.5 98.1 1-role Trans. Voice Acc. 100 100 90.0 Voice Spec. 100 100 0 Majority BL 53.9 Table 3: Accuracy of Voice Prediction by Ling. Model in Experiment 1 statistically significant.5 In general, the linguistic model largely outperforms the LM and is less sensitive to the additional confusion introduced by the SEMh input. Its BLEU score and match accuracy decrease only slightly (though statistically significantly). In Table 3, we report the performance of the linguistic model on the different candidate sets with respect to voice accuracy. 
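Before turning to that comparison, the sentence-level match and voice accuracies used throughout this section can be computed straightforwardly; the sketch below assumes a simple per-item record layout and toy strings, both of which are illustrative.

```python
# Sketch of the match and voice-accuracy computations reported in Tables 2
# and 3; the data layout is an assumption made for illustration.

def match_accuracy(items):
    """Fraction of items where the top-ranked candidate is the corpus string."""
    hits = sum(1 for it in items if it["ranked"][0]["string"] == it["gold_string"])
    return hits / len(items)

def voice_accuracy(items):
    """Fraction of items where the top-ranked candidate realises the gold voice."""
    hits = sum(1 for it in items if it["ranked"][0]["voice"] == it["gold_voice"])
    return hits / len(items)

items = [
    {"gold_string": "Die Finanzierung wurde erwähnt.", "gold_voice": "passive",
     "ranked": [{"string": "Die Finanzierung wurde erwähnt.", "voice": "passive"},
                {"string": "Man erwähnte die Finanzierung.", "voice": "active"}]},
]
print(match_accuracy(items), voice_accuracy(items))
```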
Since the candidate sets differ in the proportion of items that underspecify the voice (see “Voice Spec.” in Table 3), we also report the accuracy on the SEMn∗test set, which is a subset of SEMn excluding the items where the voice is specified. Table 3 shows that the proportion of active realisations for the SEMn∗input is very high, and the model does not outperform the majority baseline (which always selects active). In contrast, the SEMh model clearly outperforms the majority baseline. Example (4) is a case from our development set where the SEMn model incorrectly predicts an active (4-a), and the SEMh correctly predicts a passive (4-b). (4) a. 26 26 kostspielige expensive Studien studies erw¨ahnten mentioned die the Finanzierung. funding. b. Die The Finanzierung funding wurde was von by 26 26 kostspieligen expensive Studien studies erw¨ahnt. mentioned. This prediction is according to the markedness hierarchy: the patient is singular and definite, the agent 5According to a bootstrap resampling test, p < 0.05 1012 Features Match BLEU Voice Prec. VF Prec. 16.3 0.70 88.43 64.1 59.1 ScalAl. 10.4 0.64 90.37 58.9 56.3 Union 26.4 0.75 91.50 80.2 70.9 Table 4: Evaluation of Experiment 2 is plural and indefinite. Counterexamples are possible, but there is a clear statistical preference – which the model was able to pick up. On the one hand, the rankers can cope surprisingly well with the additional realisations obtained from the meaning representations. According to the global sentence overlap measures, their quality is not seriously impaired. On the other hand, the design of the representations has a substantial effect on the prediction of the alternations. The SEMn does not seem to learn certain preferences because of the extremely imbalanced distribution in the input data. This confirms the hypothesis sketched in Section 3.1, according to which the degree of the input’s underspecification can crucially change the behaviour of the ranking model. 5.2 Exp. 2: Word Order and Voice We examine the impact of certain feature types on the prediction of the variation types in our data. We are particularly interested in the interaction of voice and word order (precedence) since linguistic theories (see Section 2.2) predict similar informationstructural factors guiding their use, but usually do not consider them in conjunction. In Table 4, we report the performance of ranking models trained on the different feature subsets introduced in Section 4. The union of the features corresponds to the model trained on SEMh in Experiment 1. At a very broad level, the results suggest that the precedence and the scale alignment features interact both in the prediction of voice and word order. The most pronounced effect on voice accuracy can be seen when comparing the precedence model to the union model. Adding the surface-independent scale alignment features to the precedence features leads to a big improvement in the prediction of word order. This is not a trivial observation since a) the surface-independent features do not discriminate between the word orders and b) the precedence features are built from the same properties (see Section 4). Thus, the SVM learner discovers dependencies between relative precedence preferences and abstract properties of a verb argument which cannot be encoded in the precedence alone. It is worth noting that the precedence features improve the voice prediction. This indicates that wherever the application context allows it, voice should not be specified at a stage prior to word order. 
Example (5) is taken from our development set, illustrating a case where the union model predicted the correct voice and word order (5-a), and the scale alignment model top-ranked the incorrect voice and word order. The active verb arguments in (5-b) are both case-ambigous and placed in the non-canonical order (object < subject), so the semantic relation can be easily misunderstood. The passive in (5-a) is unambiguous since the agent is realised in a PP (and placed in the Vorfeld). (5) a. Von By den the deutschen German Medien media wurden were die the Ausl¨ander foreigners nur only erw¨ahnt, mentioned, wenn when es there Zoff trouble gab. was. b. Wenn When es there Zoff trouble gab, was, erw¨ahnten mentioned die the Ausl¨ander foreigners nur only die the deutschen German Medien. media. Moreover, our results confirm Filippova and Strube (2007) who find that it is harder to predict the correct Vorfeld occupant in a German sentence, than to predict the relative order of the constituents. 5.3 Exp. 3: Capturing Flexible Variation The previous experiment has shown that there is a certain inter-dependence between word order and voice. This experiment addresses this interaction by varying the way the training data for the ranker is labelled. We contrast two ways of labelling the sentences (see Section 4): a) all sentences that are not (nearly) identical to the reference sentence have the rank category “4”, irrespective of their voice (referred to as unlabelled model), b) the sentences that do not realise the correct voice are ranked lower than sentences with the correct voice (“4” vs. “5”), referred to as labelled model. Intuitively, the latter way of labelling tells the ranker that all sentences in the incorrect voice are worse than all sentences in the correct voice, independent of the word order. Given the first labelling strategy, the ranker can decide in an unsupervised way which combinations of word order and voice are to be preferred. 1013 Top 1 Top 1 Top 1 Top 2 Top 3 Model Match BLEU NIST Voice Prec. Prec.+Voice Prec.+Voice Prec.+Voice Labelled, no LM 21.52 0.73 12.93 91.9 76.25 71.01 78.35 82.31 Unlabelled, no LM 26.83 0.75 13.01 91.5 80.19 74.51 84.28 88.59 Unlabeled + LM 27.35 0.75 13.08 91.5 79.6 73.92 79.74 82.89 Table 5: Evaluation of Experiment 3 In Table 5, it can be seen that the unlabelled model improves over the labelled on all the sentence overlap measures. The improvements are statistically significant. Moreover, we compare the n-best accuracies achieved by the models for the joint prediction of voice and argument order. The unlabelled model is very flexible with respect to the word order-voice interaction: the accuracy dramatically improves when looking at the top 3 sentences. Table 5 also reports the performance of an unlabelled model that additionally integrates LM scores. Surprisingly, these scores have a very small positive effect on the sentence overlap features and no positive effect on the voice and precedence accuracy. The n-best evaluations even suggest that the LM scores negatively impact the ranker: the accuracy for the top 3 sentences increases much less as compared to the model that does not integrate LM scores.6 The n-best performance of a realisation ranker is practically relevant for re-ranking applications such as Velldal (2008). We think that it is also conceptually interesting. 
Previous evaluation studies suggest that the original corpus sentence is not always the only optimal realisation of a given linguistic input (Cahill and Forst, 2010; Belz and Kow, 2010). Humans seem to have varying preferences for word order contrasts in certain contexts. The n-best evaluation could reflect the behaviour of a ranking model with respect to the range of variations encountered in real discourse. The pilot human evaluation in the next Section deals with this question. 6 Human Evaluation Our experiment in Section 5.3 has shown that the accuracy of our linguistically informed ranking model dramatically increases when we consider the three 6(Nakanishi et al., 2005) also note a negative effect of including LM scores in their model, pointing out that the LM was not trained on enough data. The corpus used for training our LM might also have been too small or distinct in genre. best sentences rather than only the top-ranked sentence. This means that the model sometimes predicts almost equal naturalness for different voice realisations. Moreover, in the case of word order, we know from previous evaluation studies, that humans sometimes prefer different realisations than the original corpus sentences. This Section investigates agreement in human judgements of voice realisation. Whereas previous studies in generation mainly used human evaluation to compare different systems, or to correlate human and automatic evaluations, our primary interest is the agreement or correlation between human rankings. In particular, we explore the hypothesis that this agreement is higher in certain contexts than in others. In order to select these contexts, we use the predictions made by our ranking model. The questionnaire for our experiment comprised 24 items falling into 3 classes: a) items where the 3 best sentences predicted by the model have the same voice as the original sentence (“Correct”), b) items where the 3 top-ranked sentences realise different voices (“Mixed”), c) items where the model predicted the incorrect voice in all 3 top sentences (“False”). Each item is composed of the original sentence, the 3 top-ranked sentences (if not identical to the corpus sentence) and 2 further sentences such that each item contains different voices. For each item, we presented the previous context sentence. The experiment was completed by 8 participants, all native speakers of German, 5 had a linguistic background. The participants were asked to rank each sentence on a scale from 1-6 according to its naturalness and plausibility in the given context. The participants were explicitly allowed to use the same rank for sentences they find equally natural. The participants made heavy use of this option: out of the 192 annotated items, only 8 are ranked such that no two sentences have the same rank. We compare the human judgements by correlat1014 ing them with Spearman’s ρ. This measure is considered appropriate for graded annotation tasks in general (Erk and McCarthy, 2009), and has also been used for analysing human realisation rankings (Velldal, 2008; Cahill and Forst, 2010). We normalise the ranks according to the procedure in Velldal (2008). In Table 6, we report the correlations obtained from averaging over all pairwise correlations between the participants and the correlations restricted to the item and sentence classes. We used bootstrap re-sampling on the pairwise correlations to test that the correlations on the different item classes significantly differ from each other. 
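A minimal sketch of this agreement computation is given below; the nested-list data layout is an assumption, and the code illustrates the procedure rather than reproducing the actual analysis scripts. Because participants were allowed to assign the same rank to several sentences, a tie-aware rank correlation such as Spearman's ρ is a natural choice here.

```python
# Average pairwise Spearman correlation between participants' rankings of the
# sentences in one item; the toy data below is invented for illustration.

from itertools import combinations
from scipy.stats import spearmanr

def mean_pairwise_rho(rankings_by_participant):
    """rankings_by_participant: list of equal-length rank vectors, one per
    participant, covering the same sentences in the same order."""
    rhos = []
    for a, b in combinations(rankings_by_participant, 2):
        rho, _ = spearmanr(a, b)
        rhos.append(rho)
    return sum(rhos) / len(rhos)

# Toy item with three participants ranking five sentences (ties allowed).
item = [[1, 2, 2, 4, 5],
        [1, 3, 2, 4, 5],
        [2, 1, 3, 4, 4]]
print(round(mean_pairwise_rho(item), 3))
```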
The correlations in Table 6 suggest that the agreement between annotators is highest on the false items, and lowest on the mixed items. Humans tended to give the best rank to the original sentence more often on the false items (91%) than on the others. Moreover, the agreement is generally higher on the sentences realising the correct voice. These results seem to confirm our hypothesis that the general level of agreement between humans differs depending on the context. However, one has to be careful in relating the effects in our data solely to voice preferences. Since the sentences were chosen automatically, some examples contain very unnatural word orders that probably guided the annotators’ decisions more than the voice. This is illustrated by Example (6) showing two passive sentences from our questionnaire which differ only in the position of the adverb besser “better”. Sentence (6-a) is completely implausible for a native speaker of German, whereas Sentence (6-b) sounds very natural. (6) a. Durch By das the neue new Gesetz law sollen should besser better Eigenheimbesitzer house owners gesch¨utzt protected werden. be. b. Durch By das the neue new Gesetz law sollen should Eigenheimbesitzer house owners besser better gesch¨utzt protected werden. be. This observation brings us back to our initial point that the surface realisation task is especially challenging due to the interaction of a range of semantic and discourse phenomena. Obviously, this interaction makes it difficult to single out preferences for a specific alternation type. Future work will have to establish how this problem should be dealt with in Items All Correct Mixed False “All” sent. 0.58 0.6 0.54 0.62 “Correct” sent. 0.64 0.63 0.56 0.72 “False” sent. 0.47 0.57 0.48 0.44 Top-ranked corpus sent. 84% 78% 83% 91% Table 6: Human Evaluation the design of human evaluation experiments. 7 Conclusion We have presented a grammar-based generation architecture which implements the surface realisation of meaning representations abstracting from voice and word order. In order to be able to study voice alternations in a variety of contexts, we designed heuristic underspecification rules which establish, for instance, the alternation relation between an active with a generic agent and a passive that does not overtly realise the agent. This strategy leads to a better balanced distribution of the alternations in the training data, such that our linguistically informed generation ranking model achieves high BLEU scores and accurately predicts active and passive. In future work, we will extend our experiments to a wider range of alternations and try to capture inter-sentential context more explicitly. Moreover, it would be interesting to carry over our methodology to a purely statistical linearisation system where the relation between an input representation and a set of candidate realisations is not so clearly defined as in a grammar-based system. Our study also addressed the interaction of different linguistic variation types, i.e. word order and voice, by looking at different types of linguistic features and exploring different ways of labelling the training data. However, our SVM-based learning framework is not well-suited to directly assess the correlation between a certain feature (or feature combination) and the occurrence of an alternation. Therefore, it would be interesting to relate our work to the techniques used in theoretical papers, e.g. (Bresnan et al., 2007), where these correlations are analysed more directly. 
1015 References Judith Aissen. 1999. Markedness and subject choice in optimality theory. Natural Language and Linguistic Theory, 17(4):673–711. Judith Aissen. 2003. Differential Object Marking: Iconicity vs. Economy. Natural Language and Linguistic Theory, 21:435–483. Anja Belz and Eric Kow. 2010. Comparing rating scales and preference judgements in language evaluation. In Proceedings of the 6th International Natural Language Generation Conference (INLG’10). Anja Belz, Mike White, Josef van Genabith, Deirdre Hogan, and Amanda Stent. 2010. Finding common ground: Towards a surface realisation shared task. In Proceedings of the 6th International Natural Language Generation Conference (INLG’10). Anja Belz. 2005. Statistical generation: Three methods compared and evaluated. In Proceedings of Tenth European Workshop on Natural Language Generation (ENLG-05), pages 15–23. Bernd Bohnet, Leo Wanner, Simon Mill, and Alicia Burga. 2010. Broad coverage multilingual deep sentence generation with a stochastic multi-level realizer. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010), Beijing, China. Joan Bresnan, Shipra Dingare, and Christopher D. Manning. 2001. Soft Constraints Mirror Hard Constraints: Voice and Person in English and Lummi. In Proceedings of the LFG ’01 Conference. Joan Bresnan, Anna Cueni, Tatiana Nikitina, and Harald Baayen. 2007. Predicting the Dative Alternation. In G. Boume, I. Kraemer, and J. Zwarts, editors, Cognitive Foundations of Interpretation. Amsterdam: Royal Netherlands Academy of Science. Aoife Cahill and Martin Forst. 2010. Human Evaluation of a German Surface Realisation Ranker. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 112 – 120, Athens, Greece. Association for Computational Linguistics. Aoife Cahill and Arndt Riester. 2009. Incorporating Information Status into Generation Ranking. In Proceedings of the 47th Annual Meeting of the ACL, pages 817–825, Suntec, Singapore, August. Association for Computational Linguistics. Aoife Cahill, Martin Forst, and Christian Rohrer. 2007. Stochastic realisation ranking for a free word order language. In Proceedings of the Eleventh European Workshop on Natural Language Generation, pages 17–24, Saarbr¨ucken, Germany, June. DFKI GmbH. Document D-07-01. Dick Crouch and Tracy Holloway King. 2006. Semantics via F-Structure Rewriting. In Miriam Butt and Tracy Holloway King, editors, Proceedings of the LFG06 Conference. Dick Crouch, Mary Dalrymple, Ron Kaplan, Tracy King, John Maxwell, and Paula Newman. 2006. XLE Documentation. Technical report, Palo Alto Research Center, CA. Katrin Erk and Diana McCarthy. 2009. Graded Word Sense Assignment. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 440 – 449, Singapore. Katja Filippova and Michael Strube. 2007. Generating constituent order in German clauses. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 07), Prague, Czech Republic. Katja Filippova and Michael Strube. 2009. Tree linearization in English: Improving language model based approaches. In Companion Volume to the Proceedings of Human Language Technologies Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 09, short)., Boulder, Colorado. Martin Forst. 2007. Filling Statistics with Linguistics – Property Design for the Disambiguation of German LFG Parses. 
In ACL 2007 Workshop on Deep Linguistic Processing, pages 17–24, Prague, Czech Republic, June. Association for Computational Linguistics. Matthew Gerber and Joyce Chai. 2010. Beyond nombank: A study of implicit argumentation for nominal predicates. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD). Barbara J. Grosz, Aravind Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. Thorsten Joachims. 1996. Training linear svms in linear time. In M. Butt and T. H. King, editors, Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), CSLI Proceedings Online. Jonas Kuhn. 2003. Optimality-Theoretic Syntax—A Declarative Approach. CSLI Publications, Stanford, CA. Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proceedings of the ACL/COLING-98, pages 704–710, Montreal, Quebec. Tomasz Marciniak and Michael Strube. 2005. Using an annotated corpus as a knowledge source for language generation. In Proceedings of Workshop on Using Corpora for Natural Language Generation, pages 19–24, Birmingham, UK. Hiroko Nakanishi, Yusuke Miyao, and Junichi Tsujii. 2005. Probabilistic models for disambiguation of an 1016 HPSG-based chart generator. In Proceedings of IWPT 2005. Adwait Ratnaparkhi. 2000. Trainable methods for surface natural language generation. In Proceedings of NAACL 2000, pages 194–201, Seattle, WA. Stefan Riezler, Detlef Prescher, Jonas Kuhn, and Mark Johnson. 2000. Lexicalized stochastic modeling of constraint-based grammars using log-linear measures and EM training. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL’00), Hong Kong, pages 480–487. Stefan Riezler, Dick Crouch, Ron Kaplan, Tracy King, John Maxwell, and Mark Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL’02), Pennsylvania, Philadelphia. Eric K. Ringger, Michael Gamon, Robert C. Moore, David Rojas, Martine Smets, and Simon CorstonOliver. 2004. Linguistically Informed Statistical Models of Constituent Structure for Ordering in Sentence Realization. In Proceedings of the 2004 International Conference on Computational Linguistics, Geneva, Switzerland. Christian Rohrer and Martin Forst. 2006. Improving coverage and parsing quality of a large-scale LFG for German. In Proceedings of LREC-2006. Erik Velldal and Stephan Oepen. 2006. Statistical ranking in tactical generation. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, Sydney, Australia. Erik Velldal. 2008. Empirical Realization Ranking. Ph.D. thesis, University of Oslo, Department of Informatics. Sina Zarrieß and Jonas Kuhn. 2010. Reversing Fstructure Rewriting for Generation from Meaning Representations. In Proceedings of the LFG10 Conference, Ottawa. Sina Zarrieß. 2009. Developing German Semantics on the basis of Parallel LFG Grammars. In Proceedings of the 2009 Workshop on Grammar Engineering Across Frameworks (GEAF 2009), pages 10–18, Suntec, Singapore, August. Association for Computational Linguistics. 1017
2011
101
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1018–1026, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model Elijah Mayfield Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213 [email protected] Carolyn Penstein Ros´e Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213 [email protected] Abstract We present a novel computational formulation of speaker authority in discourse. This notion, which focuses on how speakers position themselves relative to each other in discourse, is first developed into a reliable coding scheme (0.71 agreement between human annotators). We also provide a computational model for automatically annotating text using this coding scheme, using supervised learning enhanced by constraints implemented with Integer Linear Programming. We show that this constrained model’s analyses of speaker authority correlates very strongly with expert human judgments (r2 coefficient of 0.947). 1 Introduction In this work, we seek to formalize the ways speakers position themselves in discourse. We do this in a way that maintains a notion of discourse structure, and which can be aggregated to evaluate a speaker’s overall stance in a dialogue. We define the body of work in positioning to include any attempt to formalize the processes by which speakers attempt to influence or give evidence of their relations to each other. Constructs such as Initiative and Control (Whittaker and Stenton, 1988), which attempt to operationalize the authority over a discourse’s structure, fall under the umbrella of positioning. As we construe positioning, it also includes work on detecting certainty and confusion in speech (Liscombe et al., 2005), which models a speaker’s understanding of the information in their statements. Work in dialogue act tagging is also relevant, as it seeks to describe the actions and moves with which speakers display these types of positioning (Stolcke et al., 2000). To complement these bodies of work, we choose to focus on the question of how speakers position themselves as authoritative in a discourse. This means that we must describe the way speakers introduce new topics or discussions into the discourse; the way they position themselves relative to that topic; and how these functions interact with each other. While all of the tasks mentioned above focus on specific problems in the larger rhetorical question of speaker positioning, none explicitly address this framing of authority. Each does have valuable ties to the work that we would like to do, and in section 2, we describe prior work in each of those areas, and elaborate on how each relates to our questions. We measure this as an authoritativeness ratio. Of the contentful dialogue moves made by a speaker, in what fraction of those moves is the speaker positioned as the primary authority on that topic? To measure this quantitatively, we introduce the Negotiation framework, a construct from the field of systemic functional linguistics (SFL), which addresses specifically the concepts that we are interested in. We present a reproducible formulation of this sociolinguistics research in section 3, along with our preliminary findings on reliability between human coders, where we observe inter-rater agreement of 0.71. 
Applying this coding scheme to data, we see strong correlations with important motivational constructs such as Self-Efficacy (Bandura, 1997) as well as learning gains. Next, we address automatic coding of the Negotiation framework, which we treat as a two1018 dimensional classification task. One dimension is a set of codes describing the authoritative status of a contribution1. The other dimension is a segmentation task. We impose constraints on both of these models based on the structure observed in the work of SFL. These constraints are formulated as boolean statements describing what a correct label sequence looks like, and are imposed on our model using an Integer Linear Programming formulation (Roth and Yih, 2004). In section 5, this model is evaluated on a subset of the MapTask corpus (Anderson et al., 1991) and shows a high correlation with human judgements of authoritativeness (r2 = 0.947). After a detailed error analysis, we will conclude the paper in section 6 with a discussion of our future work. 2 Background The Negotiation framework, as formulated by the SFL community, places a special emphasis on how speakers function in a discourse as sources or recipients of information or action. We break down this concept into a set of codes, one code per contribution. Before we break down the coding scheme more concretely in section 3, it is important to understand why we have chosen to introduce a new framework, rather than reusing existing computational work. Much work has examined the emergence of discourse structure from the choices speakers make at the linguistic and intentional level (Grosz and Sidner, 1986). For instance, when a speaker asks a question, it is expected to be followed with an answer. In discourse analysis, this notion is described through dialogue games (Carlson, 1983), while conversation analysis frames the structure in terms of adjacency pairs (Schegloff, 2007). These expectations can be viewed under the umbrella of conditional relevance (Levinson, 2000), and the exchanges can be labelled discourse segments. In prior work, the way that people influence discourse structure is described through the two tightlyrelated concepts of initiative and control. A speaker who begins a discourse segment is said to have initiative, while control accounts for which speaker is being addressed in a dialogue (Whittaker and Stenton, 1988). As initiative passes back and forth between discourse participants, control over the con1We treat each line in our corpus as a single contribution. versation similarly transfers from one speaker to another (Walker and Whittaker, 1990). This relation is often considered synchronous, though evidence suggests that the reality is not straightforward (Jordan and Di Eugenio, 1997). Research in initiative and control has been applied in the form of mixed-initiative dialogue systems (Smith, 1992). This is a large and active field, with applications in tutorial dialogues (Core, 2003), human-robot interactions (Peltason and Wrede, 2010), and more general approaches to effective turn-taking (Selfridge and Heeman, 2010). However, that body of work focuses on influencing discourse structure through positioning. The question that we are asking instead focuses on how speakers view their authority as a source of information about the topic of the discourse. In particular, consider questioning in discourse. In mixed-initiative analysis of discourse, asking a question always gives you control of a discourse. 
There is an expectation that your question will be followed by an answer. A speaker might already know the answer to a question they asked - for instance, when a teacher is verifying a student’s knowledge. However, in most cases asking a question represents a lack of authority, treating the other speakers as a source for that knowledge. While there have been preliminary attempts to separate out these specific types of positioning in initiative, such as Chu-Carroll and Brown (1998), it has not been studied extensively in a computational setting. Another similar thread of research is to identify a speaker’s certainty, that is, the confidence of a speaker and how that self-evaluation affects their language (Pon-Barry and Shieber, 2010). Substantial work has gone into automatically identifying levels of speaker certainty, for example in Liscombe et al. (2005) and Litman et al. (2009). The major difference between our work and this body of literature is that work on certainty has rarely focused on how state translates into interaction between speakers (with some exceptions, such as the application of certainty to tutoring dialogues (Forbes-Riley and Litman, 2009)). Instead, the focus is on the person’s self-evaluation, independent of the influence on the speaker’s positioning within a discourse. Dialogue act tagging seeks to describe the moves people make to express themselves in a discourse. 1019 This task involves defining the role of each contribution based on its function (Stolcke et al., 2000). We know that there are interesting correlations between these acts and other factors, such as learning gains (Litman and Forbes-Riley, 2006) and the relevance of a contribution for summarization (Wrede and Shriberg, 2003). However, adapting dialogue act tags to the question of how speakers position themselves is not straightforward. In particular, the granularity of these tagsets, which is already a highly debated topic (Popescu-Belis, 2008), is not ideal for the task we have set for ourselves. Many dialogue acts can be used in authoritative or nonauthoritative ways, based on context, and can position a speaker as either giver or receiver of information. Thus these more general tagsets are not specific enough to the role of authority in discourse. Each of these fields of prior work is highly valuable. However, none were designed to specifically describe how people present themselves as a source or recipient of knowledge in a discourse. Thus, we have chosen to draw on a different field of sociolinguistics. Our formalization of that theory is described in the next section. 3 The Negotiation Framework We now present the Negotiation framework2, which we use to answer the questions left unanswered in the previous section. Within the field of SFL, this framework has been continually refined over the last three decades (Berry, 1981; Martin, 1992; Martin, 2003). It attempts to describe how speakers use their role as a source of knowledge or action to position themselves relative to others in a discourse. Applications of the framework include distinguishing between focus on teacher knowledge and student reasoning (Veel, 1999) and distribution of authority in juvenile trials (Martin et al., 2008). The framework can also be applied to problems similar to those studied through the lens of initiative, such as the distinction between authority over discourse structure and authority over content (Martin, 2000). 
A challenge of applying this work to language technologies is that it has historically been highly 2All examples are drawn from the MapTask corpus and involve an instruction giver (g) and follower (f). Within examples, discourse segment boundaries are shown by horizontal lines. qualitative, with little emphasis placed on reproducibility. We have formulated a pared-down, reproducible version of the framework, presented in Section 3.1. Evidence of the usefulness of that formulation for identifying authority, and of correlations that we can study based on these codes, is presented briefly in Section 3.2. 3.1 Our Formulation of Negotiation The codes that we can apply to a contribution using the Negotiation framework are comprised of four main codes, K1, K2, A1, and A2, and two additional codes, ch and o. This is a reduction over the many task-specific or highly contextual codes used in the original work. This was done to ensure that a machine learning classification task would not be overwhelmed with many infrequent classes. The main codes are divided by two questions. First, is the contribution related to exchanging information, or to exchanging services and actions? If the former, then it is a K move (knowledge); if the latter, then an A move (action). Second, is the contribution acting as a primary actor, or secondary? In the case of knowledge, this often correlates to the difference between assertions (K1) and queries (K2). For instance, a statement of fact or opinion is a K1: g K1 well i’ve got a great viewpoint here just below the east lake By contrast, asking for someone else’s knowledge or opinion is a K2: g K2 what have you got underneath the east lake f K1 rocket launch In the case of action, the codes usually correspond to narrating action (A1) and giving instructions (A2), as below: g A2 go almost to the edge of the lake f A1 yeah A challenge move (ch) is one which directly contradicts the content or assertion of the previous line, or makes that previous contribution irrelevant. For instance, consider the exchange below, where an instruction is rejected because its presuppositions are broken by the challenging statement. g A2 then head diagonally down towards the bottom of the dead tree f ch i have don’t have a dead tree i have a dutch elm 1020 All moves that do not fit into one of these categories are classified as other (o). This includes backchannel moves, floor-grabbing moves, false starts, and any other non-contentful contributions. This theory makes use of discourse segmentation. Research in the SFL community has focused on intra-segment structure, and empirical evidence from this research has shown that exchanges between speakers follow a very specific pattern: o* X2? o* X1+ o* That is to say, each segment contains a primary move (a K1 or an A1) and an optional preceding secondary move, with other non-contentful moves interspersed throughout. A single statement of fact would be a K1 move comprising an entire segment, while a single question/answer pair would be a K2 move followed by a K1. Longer exchanges of many lines obviously also occur. We iteratively developed a coding manual which describes, in a reproducible way, how to apply the codes listed above. The six codes we use, along with their frequency in our corpus, are given in Table 1. In the next section, we evaluate the reliability and utility of hand-coded data, before moving on to automation in section 4. 
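This segment-internal pattern can also be checked mechanically. The regular expression below is a hypothetical illustration (the names are ours, challenge moves are ignored for simplicity, and the K and A variants are kept separate, anticipating the constraints in Section 4.3); it is not part of the annotation tooling.

```python
# Check that a coded segment follows the exchange pattern o* X2? o* X1+ o*,
# where the X moves in a segment are either all K (knowledge) or all A (action).

import re

K_EXCHANGE = r"(K2 )?(o )*(K1 )+"
A_EXCHANGE = r"(A2 )?(o )*(A1 )+"
SEGMENT_PATTERN = re.compile(rf"^(o )*({K_EXCHANGE}|{A_EXCHANGE})(o )*$")

def is_well_formed(labels):
    """labels: list of codes for one segment, e.g. ['K2', 'o', 'K1']."""
    return bool(SEGMENT_PATTERN.match(" ".join(labels) + " "))

print(is_well_formed(["K2", "K1"]))       # question then answer -> True
print(is_well_formed(["o", "A2", "A1"]))  # floor grab, instruction, narration -> True
print(is_well_formed(["K1", "K2"]))       # primary before secondary -> False
```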
3.2 Preliminary Evaluation This coding scheme was evaluated for reliability on two corpora using Cohen’s kappa (Cohen, 1960). Within the social sciences community, a kappa above 0.7 is considered acceptable. Two conversations were each coded by hand by two trained annotators. The first conversation was between three students in a collaborative learning task; inter-rater reliability kappa for Negotiation labels was 0.78. The second conversation was from the MapTask corpus, and kappa was 0.71. Further data was labelled by hand by one trained annotator. In our work, we label conversations using the coding scheme above. To determine how well these codes correlate with other interesting factors, we choose to assign a quantitative measure of authoritativeness to each speaker. This measure can then be compared to other features of a speaker. To do this, we use the coded labels to assign an Authoritativeness Ratio to each speaker.
Code   Meaning            Count   Percent
K1     Primary Knower     984     22.5
K2     Secondary Knower   613     14.0
A1     Primary Actor      471     10.8
A2     Secondary Actor    708     16.2
ch     Challenge          129     2.9
o      Other              1469    33.6
Total                     4374    100.0
Table 1: The six codes in our coding scheme, along with their frequency in our corpus of twenty conversations.
First, we define a function A(S, c, L) for a speaker, a contribution, and a set of labels L ⊆ {K1, K2, A1, A2, o, ch} as: A(S, c, L) = 1 if c is spoken by S with a label l ∈ L, and 0 otherwise. We then define the Authoritativeness ratio Auth(S) for a speaker S in a dialogue consisting of contributions c1...cn as:
Auth(S) = Σ_{i=1}^{n} A(S, c_i, {K1, A2}) / Σ_{i=1}^{n} A(S, c_i, {K1, K2, A1, A2})
The intuition behind this ratio is that we are only interested in the four main label types in our analysis - at least for an initial description of authority, we do not consider the non-contentful o moves. Within these four main labels, there are clearly two that appear “dominant” - statements of fact or opinion, and commands or instructions - and two that appear less dominant - questions or requests for information, and narration of an action. We sum these together to reach a single numeric value for each speaker’s projection of authority in the dialogue. The full details of our external validations of this approach are available in Howley et al. (2011). To summarize, we considered two data sets involving student collaborative learning. The first data set consisted of pairs of students interacting over two days, and was annotated for aggressive behavior, to assess warning factors in social interactions. Our analysis showed that aggressive behavior correlated with authoritativeness ratio (p < .05), and that less aggressive students became less authoritative on the second day (p < .05, effect size .15σ). The second data set was analyzed for Self-Efficacy - the confidence of each student in their own ability (Bandura, 1997) - as well as actual learning gains based on pre- and post-test scores. We found that the Authoritativeness ratio was a significant predictor of learning gains (r2 = .41, p < .04). Furthermore, in a multiple regression, we determined that the Authoritativeness ratios of both students in a group predict the average Self-Efficacy of the pair (r2 = .12, p < .01). 4 Computational Model We know that our coding scheme is useful for making predictions about speakers. We now judge whether it can be reproduced fully automatically. Our model must select, for each contribution ci in a dialogue, the most likely classification label li from {K1, K2, A1, A2, o, ch}.
We also build in parallel a segmentation model to select si from the set {new, same}. Our baseline approach to both problems is to use a bag-of-words model of the contribution, and use machine learning for classification. Certain types of interactions, explored in section 4.1, are difficult or impossible to classify without context. We build a contextual feature space, described in section 4.2, to enhance our baseline bagof-words model. We can also describe patterns that appear in discourse segments, as detailed in section 3.1. In our coding manual, these instructions are given as rules for how segments should be coded by humans. Our hypothesis is that by enforcing these rules in the output of our automatic classifier, performance will increase. In section 4.3 we formalize these constraints using Integer Linear Programming. 4.1 Challenging cases We want to distinguish between phenomena such as in the following two examples. f K2 so I’m like on the bank on the bank of the east lake g K1 yeah In this case, a one-token contribution is indisputably a K1 move, answering a yes/no question. However, in the dialogue below, it is equally inarguable that the same move is an A1: g A2 go almost to the edge of the lake f A1 yeah Without this context, these moves would be indistinguishable to a model. With it, they are both easily classified correctly. We also observed that markers for segmentation of a segment vary between contentful initiations and non-contentful ones. For instance, filler noises can often initiate segments: g o hmm... g K2 do you have a farmer’s gate? f K1 no Situations such as this are common. This is also a challenge for segmentation, as demonstrated below: g K1 oh oh it’s on the right-hand side of my great viewpoint f o okay yeah g o right eh g A2 go almost to the edge of the lake f A1 yeah A long statement or instruction from one speaker is followed up with a terse response (in the same segment) from the listener. However, after that backchannel move, a short floor-grabbing move is often made to start the next segment. This is a distinction that a bag-of-words model would have difficulty with. This is markedly different from contentful segment initiations: g A2 come directly down below the stone circle and we come up f ch I don’t have a stone circle g o you don’t have a stone circle All three of these lines look like statements, which often initiate new segments. However, only the first should be marked as starting a new segment. The other two are topically related, in the second line by contradicting the instruction, and in the third by repeating the previous person’s statement. 4.2 Contextual Feature Space Additions To incorporate the insights above into our model, we append features to our bag-of-words model. First, in our classification model we include both lexical bigrams and part-of-speech bigrams to encode further lexical knowledge and some notion of syntactic structure. To account for restatements and topic shifts, we add a feature based on cosine similarity (using term vectors weighted by TF-IDF calculated 1022 over training data). We then add a feature for the predicted label of the previous contribution - after each contribution is classified, the next contribution adds a feature for the automatic label. This requires our model to function as an on-line classifier. 
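Concretely, the contextual feature space might be assembled along the following lines; the feature-name strings, the simplified TF-IDF weighting and the toy example are expository assumptions rather than the SIDE feature templates actually used.

```python
# Illustrative assembly of the contextual classification features described
# above; all names and weights are assumptions for exposition.

import math
from collections import Counter

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def tfidf(tokens, idf):
    counts = Counter(tokens)
    return {t: c * idf.get(t, 1.0) for t, c in counts.items()}

def contextual_features(tokens, pos_tags, prev_tokens, prev_predicted_label, idf):
    feats = {f"uni:{t}": 1.0 for t in tokens}
    feats.update({f"bi:{a}_{b}": 1.0 for a, b in zip(tokens, tokens[1:])})        # lexical bigrams
    feats.update({f"posbi:{a}_{b}": 1.0 for a, b in zip(pos_tags, pos_tags[1:])}) # POS bigrams
    feats["cos_prev"] = cosine(tfidf(tokens, idf), tfidf(prev_tokens, idf))       # topic continuity
    feats[f"prev_label:{prev_predicted_label}"] = 1.0   # on-line: model's own previous prediction
    return feats

idf = {"lake": 2.3, "the": 1.0}
print(contextual_features(["go", "to", "the", "lake"], ["VB", "TO", "DT", "NN"],
                          ["the", "lake"], "A2", idf))
```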
We build two segmentation models, one trained on contributions of less than four tokens, and another trained on contributions of four or more tokens, to distinguish between characteristics of contentful and non-contentful contributions. To the short-contribution model, we add two additional features. The first represents the ratio between the length of the current contribution and the length of the previous contribution. The second represents whether a change in speaker has occurred between the current and previous contribution. 4.3 Constraints using Integer Linear Programming We formulate our constraints using Integer Linear Programming (ILP). This formulation has an advantage over other sequence labelling formulations, such as Viterbi decoding, in its ability to enforce structure through constraints. We then enhance this classifier by adding constraints, which allow expert knowledge of discourse structure to be enforced in classification. We can use these constraints to eliminate label options which would violate the rules for a segment outlined in our coding manual. Each classification decision is made at the contribution level, jointly optimizing the Negotiation label and segmentation label for a single contribution, then treating those labels as given for the next contribution classification. To define our objective function for optimization, for each possible label, we train a one vs. all SVM, and use the resulting regression for each label as a score, giving us six values ⃗li for our Negotiation label and two values ⃗si for our segmentation label. Then, subject to the constraints below, we optimize: arg max l∈⃗li,s∈⃗si l + s Thus, at each contribution, if the highest-scoring Negotiation label breaks a constraint, the model can optimize whether to drop to the next-most-likely label, or start a new segment. Recall from section 3.1 that our discourse segments follow strict rules related to ordering and repetition of contributions. Below, we list the constraints that we used in our model to enforce that pattern, along with a brief explanation of the intuition behind each. ∀ci ∈s, (li = K2) ⇒ ∀j < i, cj ∈t ⇒(lj ̸= K1) (1) ∀ci ∈s, (li = A2) ⇒ ∀j < i, cj ∈t ⇒(lj ̸= A1) (2) The first constraints enforce the rule that a primary move cannot occur before a secondary move in the same segment. For instance, a question must initiate a new segment if it follows a statement. ∀ci ∈s, (li ∈{A1, A2}) ⇒ ∀j < i, cj ∈s ⇒(lj /∈{K1, K2}) (3) ∀ci ∈s, (li ∈{K1, K2}) ⇒ ∀j < i, cj ∈s ⇒(lj /∈{A1, A2}) (4) These constraints specify that A moves and K moves cannot cooccur in a segment. An instruction for action and a question requesting information must be considered separate segments. ∀ci ∈s, (li = A1) ⇒((li−1 = A1) ∨ ∀j < i, cj ∈s ⇒(lj ̸= A1)) (5) ∀ci ∈s, (li = K1) ⇒((li−1 = K1) ∨ ∀j < i, cj ∈s ⇒(lj ̸= K1)) (6) This pair states that two primary moves cannot occur in the same segment unless they are contiguous, in rapid succession. ∀ci ∈s, (li = A1) ⇒ ∀j < i, cj ∈s, (lj = A2) ⇒(Si ̸= Sj) (7) ∀ci ∈s, (li = K1) ⇒ ∀j < i, cj ∈s, (lj = K2) ⇒(Si ̸= Sj) (8) The last set of constraints enforce the intuitive notion that a speaker cannot follow their own secondary move with a primary move in that segment (such as answering their own question). 1023 Computationally, an advantage of these constraints is that they do not extend past the current segment in history. This means that they usually are only enforced over the past few moves, and do not enforce any global constraint over the structure of the whole dialogue. 
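Because the constraints never look past the current segment, they can be read as a purely local feasibility test on the label history. The sketch below is not the ILP formulation itself, only an illustration of that per-segment check under our own reading of constraints (1)-(8); the function and variable names are ours:

```python
def violates_segment_constraints(labels, speakers, new_label, new_speaker):
    """Check whether appending new_label (spoken by new_speaker) to the current
    segment would break any of constraints (1)-(8) described above.
    labels/speakers hold the moves already in this segment, oldest first."""
    prev = labels[-1] if labels else None
    # (1)-(2): a secondary move may not follow its primary in the same segment.
    if new_label == "K2" and "K1" in labels:
        return True
    if new_label == "A2" and "A1" in labels:
        return True
    # (3)-(4): K moves and A moves may not co-occur in one segment.
    if new_label in ("A1", "A2") and any(l in ("K1", "K2") for l in labels):
        return True
    if new_label in ("K1", "K2") and any(l in ("A1", "A2") for l in labels):
        return True
    # (5)-(6): two primary moves only if they are contiguous.
    if new_label == "A1" and "A1" in labels and prev != "A1":
        return True
    if new_label == "K1" and "K1" in labels and prev != "K1":
        return True
    # (7)-(8): a speaker may not follow their own secondary move with a primary.
    if new_label == "A1" and any(l == "A2" and s == new_speaker
                                 for l, s in zip(labels, speakers)):
        return True
    if new_label == "K1" and any(l == "K2" and s == new_speaker
                                 for l, s in zip(labels, speakers)):
        return True
    return False

# A K2 question followed by a K1 answer from the other speaker is fine, but a
# K1 "answer" from the asker themselves is ruled out by constraint (8).
assert not violates_segment_constraints(["K2"], ["g"], "K1", "f")
assert violates_segment_constraints(["K2"], ["g"], "K1", "g")
```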
This allows the constraints to be flexible to various conversational styles, and tractable for fast computation independent of the length of the dialogue. 5 Evaluation We test our models on a twenty conversation subset of the MapTask corpus detailed in Table 1. We compare the use of four models in our results. • Baseline: This model uses a bag-of-words feature space as input to an SVM classifier. No segmentation model is used and no ILP constraints are enforced. • Baseline+ILP: This model uses the baseline feature space as input to both classification and segmentation models. ILP constraints are enforced between these models. • Contextual: This model uses our enhanced feature space from section 4.2, with no segmentation model and no ILP constraints enforced. • Contextual+ILP: This model uses the enhanced feature spaces for both Negotiation labels and segment boundaries from section 4.2 to enforce ILP constraints. For segmentation, we evaluate our models using exact-match accuracy. We use multiple evaluation metrics to judge classification. The first and most basic is accuracy - the percentage of accurately chosen Negotiation labels. Secondly, we use Cohen’s Kappa (Cohen, 1960) to judge improvement in accuracy over chance. The final evaluation is the r2 coefficient computed between predicted and actual Authoritativeness ratios per speaker. This represents how much variance in authoritativeness is accounted for in the predicted ratios. This final metric is the most important for measuring reproducibility of human analyses of speaker authority in conversation. We use SIDE for feature extraction (Mayfield and Ros´e, 2010), SVM-Light for machine learning Model Accuracy Kappa r2 Baseline 59.7% 0.465 0.354 Baseline+ILP 61.6% 0.488 0.663 Segmentation 72.3% Contextual 66.7% 0.565 0.908 Contextual+ILP 68.4% 0.584 0.947 Segmentation 74.9% Table 2: Performance evaluation for our models. Each line is significantly improved in both accuracy and r2 error from the previous line (p < .01). (Joachims, 1999), and Learning-Based Java for ILP inference (Rizzolo and Roth, 2010). Performance is evaluated by 20-fold cross-validation, where each fold is trained on 19 conversations and tested on the remaining one. Statistical significance was calculated using a student’s paired t-test. For accuracy and kappa, n = 20 (one data point per conversation) and for r2, n = 40 (one data point per speaker). 5.1 Results All classification results are given in Table 2 and charts showing correlation between predicted and actual speaker Authoritativeness ratios are shown in Figure 1. We observe that the baseline bag-of-words model performs well above random chance (kappa of 0.465); however, its accuracy is still very low and its ability to predict Authoritativeness ratio of a speaker is not particularly high (r2 of 0.354 with ratios from manually labelled data). We observe a significant improvement when ILP constraints are applied to this model. The contextual model described in section 4.2 performs better than our baseline constrained model. However, the gains found in the contextual model are somewhat orthogonal to the gains from using ILP constraints, as applying those constraints to the contextual model results in further performance gains (and a high r2 coefficient of 0.947). Our segmentation model was evaluated based on exact matches in boundaries. Switching from baseline to contextual features, we observe an improvement in accuracy of 2.6%. 
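For concreteness, the per-speaker part of this evaluation can be sketched as follows (Python, standard library; we read r^2 as the squared Pearson correlation between predicted and actual ratios, and the toy label sequences are invented for illustration):

```python
def authoritativeness(labels):
    """Auth(S): share of a speaker's core moves (K1, K2, A1, A2) that are
    the 'dominant' ones (K1 or A2); o and ch moves are ignored."""
    core = [l for l in labels if l in ("K1", "K2", "A1", "A2")]
    if not core:
        return 0.0
    return sum(l in ("K1", "A2") for l in core) / len(core)

def r_squared(xs, ys):
    """Squared Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return (cov * cov) / (vx * vy) if vx and vy else 0.0

# Toy data for three speakers; in the real evaluation there is one data point
# per speaker across the twenty conversations.
gold = {"g": ["K1", "A2", "o", "K1"], "f": ["K2", "A1", "ch", "K2"],
        "h": ["K1", "K2", "A1", "o"]}
pred = {"g": ["K1", "A2", "o", "K2"], "f": ["K2", "A1", "o", "K2"],
        "h": ["K1", "K2", "A2", "o"]}
speakers = sorted(gold)
actual = [authoritativeness(gold[s]) for s in speakers]
predicted = [authoritativeness(pred[s]) for s in speakers]
print(r_squared(predicted, actual))
```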
5.2 Error Analysis An error analysis of model predictions explains the large effect on correlation despite relatively smaller 1024 Figure 1: Plots of predicted (x axis) and actual (y axis) Authoritativeness ratios for speakers across 20 conversations, for the Baseline (left), Baseline+Constraints (center), and Contextual+Constraints (right) models. changes in accuracy. Our Authoritativeness ratio does not take into account moves labelled o or ch. What we find is that the most advanced model still makes many mistakes at determining whether a move should be labelled as o or a core move. This error rate is, however, fairly consistent across the four core move codes. When a move is determined (correctly) to not be an o move, the system is highly accurate in distinguishing between the four core labels. The one systematic confusion that continues to appear most frequently in our results is the inability to distinguish between a segment containing an A2 move followed by an A1 move, and a segment containing a K1 move followed by an o move. The surface structure of these types of exchanges is very similar. Consider the following two exchanges: g A2 if you come down almost to the bottom of the map that I’ve got f A1 uh-huh f K1 but the meadow’s below my broken gate g o right yes These two exchanges on a surface level are highly similar. Out of context, making this distinction is very hard even for human coders, so it is not surprising then that this pattern is the most difficult one to recognize in this corpus. It contributes most of the remaining confusion between the four core codes. 6 Conclusions In this work we have presented one formulation of authority in dialogue. This formulation allows us to describe positioning in discourse in a way that is complementary to prior work in mixed-initiative dialogue systems and analysis of speaker certainty. Our model includes a simple understanding of discourse structure while also encoding information about the types of moves used, and the certainty of a speaker as a source of information. This formulation is reproducible by human coders, with an inter-rater reliability of 0.71. We have then presented a computational model for automatically applying these codes per contribution. In our best model, we see a good 68.4% accuracy on a six-way individual contribution labelling task. More importantly, this model replicates human analyses of authoritativeness very well, with an r2 coefficient of 0.947. There is room for improvement in our model in future work. Further use of contextual features will more thoroughly represent the information we want our model to take into account. Our segmentation accuracy is also fairly low, and further examination of segmentation accuracy using a more sophisticated evaluation metric, such as WindowDiff (Pevzner and Hearst, 2002), would be helpful. In general, however, we now have an automated model that is reliable in reproducing human judgments of authoritativeness. We are now interested in how we can apply this to the larger questions of positioning we began this paper by asking, especially in describing speaker positioning at various instants throughout a single discourse. This will be the main thrust of our future work. Acknowledgements This research was supported by NSF grants SBE0836012 and HCC-0803482. 1025 References Anne Anderson, Miles Bader, Ellen Bard, Elizabeth Boyle, Gwyneth Doherty, Simon Garrod, et al. 1991. The HCRC Map Task Corpus. In Language and Speech. Albert Bandura. 1997. 
Self-efficacy: The Exercise of Control Margaret Berry. 1981. Towards Layers of Exchange Structure for Directive Exchanges. In Network 2. Lauri Carlson. 1983. Dialogue Games: An Approach to Discourse Analysis. Jennifer Chu-Carroll and Michael Brown. 1998. An Evidential Model for Tracking Initiative in Collaborative Dialogue Interactions. In User Modeling and UserAdapted Interaction. Jacob Cohen. 1960. A Coefficient of Agreement for Nominal Scales. In Educational and Psychological Measurement. Mark Core and Johanna Moore and Claus Zinn. 2003. The Role of Initiative in Tutorial Dialogue. In Proceedings of EACL. Kate Forbes-Riley and Diane Litman. 2009. Adapting to Student Uncertainty Improves Tutoring Dialogues. In Proceedings of Artificial Intelligence in Education. Barbara Grosz and Candace Sidner. 1986. Attention, Intentions, and the Structure of Discourse. In Computational Linguistics. Iris Howley and Elijah Mayfield and Carolyn Penstein Ros´e. 2011. Missing Something? Authority in Collaborative Learning. In Proceedings of ComputerSupported Collaborative Learning. Thorsten Joachims. 1999. Making large-Scale SVM Learning Practical. In Advances in Kernel Methods - Support Vector Learning. Pamela Jordan and Barbara Di Eugenio. 1997. Control and Initiative in Collaborative Problem Solving Dialogues. In Proceedings of AAAI Spring Symposium on Computational Models for Mixed Initiative Interactions. Stephen Levinson. 2000. Pragmatics. Jackson Liscombe, Julia Hirschberg, and Jennifer Venditti. 2005. Detecting Certainness in Spoken Tutorial Dialogues. In Proceedings of Interspeech. Diane Litman and Kate Forbes-Riley. 2006. Correlations betweeen Dialogue Acts and Learning in Spoken Tutoring Dialogue. In Natural Language Engineering. Diane Litman, Mihai Rotaru, and Greg Nicholas. 2009. Classifying Turn-Level Uncertainty Using Word-Level Prosody. In Proceedings of Interspeech. James Martin. 1992. English Text: System and Structure. James Martin. 2000. Factoring out Exchange: Types of Structure. In Working with Dialogue. James Martin and David Rose. 2003. Working with Discourse: Meaning Beyond the Clause. James Martin, Michele Zappavigna, and Paul Dwyer. 2008. Negotiating Shame: Exchange and Genre Structure in Youth Justice Conferencing. In Proceedings of European Systemic Functional Linguistics. Elijah Mayfield and Carolyn Penstein Ros´e. 2010. An Interactive Tool for Supporting Error Analysis for Text Mining. In Proceedings of Demo Session at NAACL. Julia Peltason and Britta Wrede. 2010. Modeling Human-Robot Interaction Based on Generic Interaction Patterns. In AAAI Report on Dialog with Robots. Lev Pevzner and Marti Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. In Computational Linguistics. Heather Pon-Barry and Stuart Shieber. 2010. Assessing Self-awareness and Transparency when Classifying a Speakers Level of Certainty. In Speech Prosody. Andrei Popescu-Belis. 2008. Dimensionality of Dialogue Act Tagsets: An Empirical Analysis of Large Corpora. In Language Resources and Evaluation. Nick Rizzolo and Dan Roth. 2010. Learning Based Java for Rapid Development of NLP Systems. In Language Resources and Evaluation. Dan Roth and Wen-Tau Yih. 2004. A Linear Programming Formulation for Global Inference in Natural Language Tasks. In Proceedings of CoNLL. Emanuel Schegloff. 2007. Sequence Organization in Interaction: A Primer in Conversation Analysis. Ethan Selfridge and Peter Heeman. 2010. ImportanceDriven Turn-Bidding for Spoken Dialogue Systems. 
In Proceedings of ACL. Ronnie Smith. 1992. A computational model of expectation-driven mixed-initiative dialog processing. Ph.D. Dissertation. Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, et al. 2000. Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech. In Computational Linguistics. Robert Veel. 1999. Language, Knowledge, and Authority in School Mathematics. In Pedagogy and the Shaping of Consciousness: Linguistics and Social Processes Marilyn Walker and Steve Whittaker. 1990. Mixed Initiative in Dialogue: An Investigation into Discourse Structure. In Proceedings of ACL. Steve Whittaker and Phil Stenton. 1988. Cues and Control in Expert-Client Dialogues. In Proceedings of ACL. Britta Wrede and Elizabeth Shriberg. 2003. The Relationship between Dialogue Acts and Hot Spots in Meetings. In IEEE Workshop on Automatic Speech Recognition and Understanding. 1026
2011
102
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1027–1035, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Reordering Metrics for MT Alexandra Birch Miles Osborne [email protected] [email protected] University of Edinburgh 10 Crichton Street Edinburgh, EH8 9AB, UK Abstract One of the major challenges facing statistical machine translation is how to model differences in word order between languages. Although a great deal of research has focussed on this problem, progress is hampered by the lack of reliable metrics. Most current metrics are based on matching lexical items in the translation and the reference, and their ability to measure the quality of word order has not been demonstrated. This paper presents a novel metric, the LRscore, which explicitly measures the quality of word order by using permutation distance metrics. We show that the metric is more consistent with human judgements than other metrics, including the BLEU score. We also show that the LRscore can successfully be used as the objective function when training translation model parameters. Training with the LRscore leads to output which is preferred by humans. Moreover, the translations incur no penalty in terms of BLEU scores. 1 Introduction Research in machine translation has focused broadly on two main goals, improving word choice and improving word order in translation output. Current machine translation metrics rely upon indirect methods for measuring the quality of the word order, and their ability to capture the quality of word order is poor (Birch et al., 2010). There are currently two main approaches to evaluating reordering. The first is exemplified by the BLEU score (Papineni et al., 2002), which counts the number of matching n-grams between the reference and the hypothesis. Word order is captured by the proportion of longer n-grams which match. This method does not consider the position of matching words, and only captures ordering differences if there is an exact match between the words in the translation and the reference. Another approach is taken by two other commonly used metrics, METEOR (Banerjee and Lavie, 2005) and TER (Snover et al., 2006). They both search for an alignment between the translation and the reference, and from this they calculate a penalty based on the number of differences in order between the two sentences. When block moves are allowed the search space is very large, and matching stems and synonyms introduces errors. Importantly, none of these metrics capture the distance by which words are out of order. Also, they conflate reordering performance with the quality of the lexical items in the translation, making it difficult to tease apart the impact of changes. More sophisticated metrics, such as the RTE metric (Pad´o et al., 2009), use higher level syntactic or semantic analysis to determine the grammaticality of the output. These approaches require annotation and can be very slow to run. For most research, shallow metrics are more appropriate. We introduce a novel shallow metric, the Lexical Reordering Score (LRscore), which explicitly measures the quality of word order in machine translations and interpolates it with a lexical metric. This results in a simple, decomposable metric which makes it easy for researchers to pinpoint the effect of their changes. 
In this paper we show that the LRscore is more consistent with human judgements than other metrics for five out of eight different language pairs. We also apply the LRscore during Minimum Error Rate Training (MERT) to see whether information on reordering allows the translation model to produce better reorderings. We show that humans prefer the output of systems trained with the LRscore 52.5% as compared to 43.9% when training with the BLEU score. Furthermore, training with the LRscore does not result in lower BLEU scores.

The rest of the paper proceeds as follows. Section 2 describes the reordering and lexical metrics that are used and how they are combined. Section 3 presents the experiments on consistency with human judgements and describes how to train the language independent parameter of the LRscore. Section 4 reports the results of the experiments on MERT. Finally we discuss related work and conclude.

2 The LRscore

In this section we present the LRscore which measures reordering using permutation distance metrics. These reordering metrics have been demonstrated to correlate strongly with human judgements of word order quality (Birch et al., 2010). The LRscore combines the reordering metrics with lexical metrics to provide a complete metric for evaluating machine translations.

2.1 Reordering metrics

The relative ordering of words in the source and target sentences is encoded in alignments. We can interpret alignments as permutations which allows us to apply research into metrics for ordered encodings to measuring and evaluating reorderings. We use distance metrics over permutations to evaluate reordering performance. Figure 1 shows three permutations. Each position represents a source word and each value indicates the relative positions of the aligned target words. In Figure 1 (a) represents the identity permutation, which would result from a monotone alignment, (b) represents a small reordering consisting of two words whose orders are inverted, and (c) represents a large reordering where the two halves of the sentence are inverted in the target.

(a) (1 2 3 4 5 6 7 8 9 10)
(b) (1 2 3 4 •6 •5 •7 8 9 10)
(c) (6 7 8 9 10 •1 2 3 4 5)

Figure 1. Three permutations: (a) monotone, (b) with a small reordering and (c) with a large reordering. Bullet points highlight non-sequential neighbours.

A translation can potentially have many valid word orderings. However, we can be reasonably certain that the ordering of the reference sentence must be acceptable. We therefore compare the ordering of a translation with that of the reference sentence. Where multiple references exist, we select the closest, i.e. the one that gives the best score. The underlying assumption is that most reasonable word orderings should be fairly similar to the reference, which is a necessary assumption for all automatic machine translation metrics.

Permutations encode one-one relations, whereas alignments contain null alignments and one-many, many-one and many-many relations. We make some simplifying assumptions to allow us to work with permutations. Source words aligned to null are assigned the target word position immediately after the target word position of the previous source word. Where multiple source words are aligned to the same target word or phrase, a many-to-one relation, the target ordering is assumed to be monotone. When one source word is aligned to multiple target words, a one-to-many relation, the source word is assumed to be aligned to the first target word.
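A small sketch of how an alignment can be collapsed into a permutation under these simplifications is given below (Python; the input representation, a list mapping each source position to its aligned target positions, is our own choice rather than anything prescribed by the paper):

```python
def alignment_to_permutation(align):
    """Collapse a (possibly many-to-many) source-to-target alignment into a
    permutation of source positions, following the simplifications above:
    - a source word aligned to several target words keeps only the first;
    - a null-aligned source word takes the position just after the target
      position assigned to the previous source word;
    - several source words sharing one target word are kept in source order.
    align[i] is the sorted list of target positions for source word i.
    Returns, for each source position, the rank of its target position."""
    keys = []
    prev = -1.0
    for targets in align:
        if targets:                 # one-to-many: keep the first aligned target word
            prev = float(targets[0])
        else:                       # null alignment: just after the previous one
            prev = prev + 0.5
        keys.append(prev)
    # A stable sort keeps tied (many-to-one) source words in monotone order.
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    perm = [0] * len(keys)
    for rank, src in enumerate(order):
        perm[src] = rank + 1        # 1-based values, as in Figure 1
    return perm

# Source words 0..3; word 1 is unaligned, words 2 and 3 share target word 0.
print(alignment_to_permutation([[2], [], [0, 3], [0]]))   # -> [3, 4, 1, 2]
```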
These simplifications are chosen so as to reduce the alignment to a bijective relationship without introducing any extraneous reorderings, i.e. they encode a basic monotone ordering assumption.

We choose permutation distance metrics which are sensitive to the number of words that are out of order, as humans are assumed to be sensitive to the number of words that are out of order in a sentence. The two permutations we refer to, π and σ, are the source-reference permutation and the source-translation permutation. The metrics are normalised so that 0 means that the permutations are completely inverted, and 1 means that they are identical. We report these scores as percentages.

2.1.1 Hamming Distance

The Hamming distance (Hamming, 1950) measures the number of disagreements between two permutations. It is defined as follows:

d_h(\pi, \sigma) = 1 - \frac{\sum_{i=1}^{n} x_i}{n}, \quad x_i = \begin{cases} 0 & \text{if } \pi(i) = \sigma(i) \\ 1 & \text{otherwise} \end{cases}

where n is the length of the permutation. The Hamming distance is the simplest permutation distance metric and is useful as a baseline. It has no concept of the relative ordering of words.

2.1.2 Kendall's Tau Distance

Kendall's tau distance is the minimum number of transpositions of two adjacent symbols necessary to transform one permutation into another (Kendall, 1938). It represents the percentage of pairs of elements which share the same order between two permutations. It is defined as follows:

d_k(\pi, \sigma) = 1 - \sqrt{\frac{\sum_{i=1}^{n} \sum_{j=1}^{n} z_{ij}}{Z}}

where

z_{ij} = \begin{cases} 1 & \text{if } \pi(i) < \pi(j) \text{ and } \sigma(i) > \sigma(j) \\ 0 & \text{otherwise} \end{cases} \qquad Z = \frac{n^2 - n}{2}

Kendall's tau seems particularly appropriate for measuring word order differences as the relative ordering of words is taken into account. However, most human and machine ordering differences are much closer to monotone than to inverted. The range of values of Kendall's tau is therefore too narrow and close to 1. For this reason we take the square root of the standard metric. This adjusted d_k is also more correlated with human judgements of reordering quality (Birch et al., 2010).

We use the example in Figure 1 to highlight the problem with current MT metrics, and to demonstrate how the permutation distance metrics are calculated. In Table 1 we present the metric results for the example permutations. The metrics are calculated by comparing the permutation string with the monotone permutation.

Eg.   BLEU    METEOR   TER     dh      dk
(a)   100.0   100.0    100.0   100.0   100.0
(b)   61.8    86.9     90.0    80.0    85.1
(c)   81.3    92.6     90.0    0.0     25.5

Table 1. Metric scores for examples in Figure 1 which are calculated by comparing the permutations to the identity. All metrics are adjusted so that 100 is the best score and 0 the worst.

(a) receives the best score for all metrics as it is compared to itself. BLEU and METEOR fail to recognise that (b) represents a small reordering and (c) a large reordering and they assign a lower score to (b). The reason for this is that they are sensitive to breaks in order, but not to the actual word order differences. BLEU matches more n-grams for (c) and consequently assigns it a higher score. METEOR counts the number of blocks that the translation is broken into, in order to align it with the source. (b) is aligned using four blocks, whereas (c) is aligned using only two blocks. TER counts the number of edits, allowing for block shifts, and applies one block shift for each example, resulting in an equal score for (b) and (c). Both the Hamming distance d_h and the Kendall's tau distance d_k correctly assign (c) a worse score than (b). Note that for (c), the Hamming distance was not able to reward the permutation for the correct relative ordering of words within the two large blocks and gave (c) a score of 0, whereas Kendall's tau takes relative ordering into account.
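Both distances are straightforward to compute, and a quick implementation reproduces the d_h and d_k values in Table 1 (Python, standard library; permutations are written as 1-based lists exactly as in Figure 1):

```python
import math

def hamming(pi, sigma):
    """d_h: one minus the fraction of positions whose values disagree."""
    n = len(pi)
    disagreements = sum(p != s for p, s in zip(pi, sigma))
    return 1.0 - disagreements / n

def kendalls_tau(pi, sigma):
    """Adjusted d_k: square root applied to the normalised count of
    discordant pairs, as described above."""
    n = len(pi)
    z = sum(1 for i in range(n) for j in range(n)
            if pi[i] < pi[j] and sigma[i] > sigma[j])
    Z = (n * n - n) / 2
    return 1.0 - math.sqrt(z / Z)

identity = list(range(1, 11))
b = [1, 2, 3, 4, 6, 5, 7, 8, 9, 10]
c = [6, 7, 8, 9, 10, 1, 2, 3, 4, 5]
print(round(hamming(identity, b) * 100, 1))        # 80.0, as in Table 1
print(round(kendalls_tau(identity, b) * 100, 1))   # 85.1, as in Table 1
print(round(hamming(identity, c) * 100, 1))        # 0.0, as in Table 1
print(round(kendalls_tau(identity, c) * 100, 1))   # 25.5, as in Table 1
```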
Note that for (c), the Hamming distance was not able to reward the permutation for the correct relative ordering of words within the two large blocks and gave (c) a score of 0, whereas Kendall’s tau takes relative ordering into account. Wong and Kit (2009) also suggest a metric which combines a word choice and a word order component. They propose a type of F-measure which uses a matching function M to calculate precision and recall. M combines the number of matched words, weighted by their tfidf importance, with their position difference score, and finally subtracting a score for unmatched words. Including unmatched words in the M function undermines the interpretation of the supposed F-measure. The reordering component is the average difference of absolute and relative word positions which has no clear meaning. This score is not intuitive or easily decomposable and it is more similar to METEOR, with synonym and stem functionality mixed with a reordering penalty, than to our metric. 2.2 Combined Metric The LRscore consists of a reordering distance metric which is linearly interpolated with a lexical score to form a complete machine translation evaluation metric. The metric is decomposable because the individual lexical and reordering components can be looked at individually. The following formula describes how to calculate the LRscore: LRscore = αR + (1 −α)L (1) The metric contains only one parameter, α, which balances the contribution of the reordering metric, R, and the lexical metric, L. Here we use BLEU as 1029 the lexical metric. R is the average permutation distance metric adjusted by the brevity penalty and it is calculated as follows: R = P s∈S dsBPs |S| (2) Where S is a set of test sentences, ds is the reordering distance for a sentence and BP is the brevity penalty. The brevity penalty is calculated as: BP =  1 if t > r e1−r/t if t ≤r (3) where t is the length of the translation, and r is the closest reference length. If the reference sentence is slightly longer than the translation, then the brevity penalty will be a fraction somewhat smaller than 1. This has the effect of penalising translations that are shorter than the reference. The brevity penalty within the reordering component is necessary as the distance-based metric would provide the same score for a one word translation as it would for a longer monotone translation. R is combined with a system level lexical score. In this paper we apply the BLEU score as the lexical metric, as it is well known and it measures lexical precision at different n-gram lengths. We experiment with the full BLEU score and the 1-gram BLEU score, BLEU1, which is purely a measure of the precision of the word choice. The 4-gram BLEU score includes some measure of the local reordering success in the precision of the longer n-grams. BLEU is an important baseline, and improving on it by including more reordering information is an interesting result. The lexical component of the system can be any meaningful metric for a particular target language. If a researcher was interested in morphologically rich languages, for example, METEOR could be used. We use the LRscore to return sentence level scores as well system level scores, and when doing so the smoothed BLEU (Lin and Och, 2004) is used. 3 Consistency with Human Judgements Automatic metrics must be validated by comparing their scores with human judgements. 
We train the metric parameter to optimise consistency with human preference judgements across different language pairs and then we show that the LRscore is more consistent with humans than other commonly used metrics. 3.1 Experimental Design Human judgement of rank has been chosen as the official determinant of translation quality for the 2009 Workshop on Machine Translation (Callison-Burch et al., 2009). We used human ranking data from this workshop to evaluate the LRscore. This consisted of German, French, Spanish and Czech translation systems that were run both into and out of English. In total there were 52,265 pairwise rank judgements collected. Our reordering metric relies upon word alignments that are generated between the source and the reference sentences, and the source and the translated sentences. In an ideal scenario, the translation system outputs the alignments and the reference set can be selected to have gold standard human alignments. However, the data that we use to evaluate metrics does not have any gold standard alignments and we must train automatic alignment models to generate them. We used version two of the Berkeley alignment model (Liang et al., 2006), with the posterior threshold set at 0.5. Our Spanish-, French- and German-English alignment models are trained using Europarl version 5 (Koehn, 2005). The Czech-English alignment model is trained on sections 0-2 of the Czech-English Parallel Corpus, version 0.9 (Bojar and Zabokrtsky, 2009). The metric scores are calculated for the test set from the 2009 workshop on machine translation. It consists of 2525 sentences in English, French, German, Spanish and Czech. These sentences have been translated by different machine translation systems and the output submitted to the workshop. The system output along with human evaluations can be downloaded from the web1. The BLEU score has five parameters, one for each n-gram, and one for the brevity penalty. These parameters are set to a default uniform value of one. METEOR has 3 parameters which have been trained for human judgements of rank (Lavie and Agarwal, 2008). METEOR version 0.7 was used. The other baseline metric used was TER version 0.7.25. We adapt TER by subtracting it from one, so that all 1http://www.statmt.org/wmt09/results.html 1030 metric increases mean an improvement in the translation. The TER metric has five parameters which have not been trained. Using rank judgements, we do not have absolute scores and so we cannot compare translations across different sentences and extract correlation statistics. We therefore use the method adopted in the 2009 workshop on machine translation (Callison-Burch et al., 2009). We ascertained how consistent the automatic metrics were with the human judgements by calculating consistency in the following manner. We take each pairwise comparison of translation output for single sentences by a particular judge, and we recorded whether or not the metrics were consistent with the human rank. I.e. we counted cases where both the metric and the human judge agreed that one system is better than another. We divided this by the total number of pairwise comparisons to get a percentage. We excluded pairs which the human annotators ranked as ties. de-en es-en fr-en cz-en dk 73.9 80.5 80.4 81.1 Table 2. The average Kendall’s tau reordering distance between the test and reference sentences. 100 means monotone thus de-en has the most reordering. We present a novel method for setting the LRscore parameter. 
Using multiple language pairs, we train the parameter according to the amount of reordering seen in each test set. The advantage of this approach is that researchers do not need to train the parameter for new language pairs or test domains. They can simply calculate the amount of reordering in the test set and adjust the parameter accordingly. The amount of reordering is calculated as the Kendall’s tau distance between the source and the reference sentences as compared to dummy monotone sentences. The amount of reordering for the test sentences is reported in Table 2. GermanEnglish shows more reordering than other language pairs as it has a lower dk score of 73.9. The language independent parameter (θ) is adjusted by applying the reordering amount (dk) as an exponent. θ is allowed to takes values of between 0 and 1. This works in a similar way to the brevity penalty. With more reordering, the dk becomes smaller which leads to an increase in the final value of α. α represents the percentage contribution of the reordering component in the LRscore: α = θdk (4) The language independent parameter θ is trained once, over multiple language pairs. This procedure optimises the average of the consistency results across the different language pairs. We use greedy hillclimbing in order to find the optimal setting. As hillclimbing can end up in a local minima, we perform 20 random restarts, and retaining only the parameter value with the best consistency result. 3.2 Results Table 3 reports the optimal consistency of the LRscore and baseline metrics with human judgements for each language pair. The LRscore variations are named as follows: LR refers to the LRscore, “H” refers to the Hamming distance and “K” to Kendall’s tau distance. “B1” and “B4” refer to the smoothed BLEU score with the 1-gram and the complete scores. Table 3 shows that the LRscore is more consistent with human judgement for 5 out of the 8 language pairs. This is an important result which shows that combining lexical and reordering information makes for a stronger metric than the baseline metrics which do not have a strong reordering component. METEOR is the most consistent for the CzechEnglish and English-Czech language pairs, which have the least amount of reordering. METEOR lags behind for the language pairs with the most reordering, the German-English and English-German pairs. Here LR-KB4 is the best metric, which shows that metrics which are sensitive to the distance words are out of order are more appropriate for situations with a reasonable amount of reordering. 4 Optimising Translation Models Automatic metrics are useful for evaluation, but they are essential for training model parameters. In this section we apply the LRscore as the objective function in MERT training (Och, 2003). MERT minimises translation errors according to some automatic evaluation metric while searching for the best parameter settings over the N-best output. A MERT trained model is likely to exhibit the properties that 1031 Metric de-en es-en fr-en cz-en en-de en-es en-fr en-cz ave METEOR 58.6 58.3 58.3 59.4 52.6 55.7 61.2 55.6 57.5 TER 53.2 50.1 52.6 47.5 48.6 49.6 58.3 45.8 50.7 BLEU1 56.1 57.0 56.7 52.5 52.1 54.2 62.3 53.3 55.6 BLEU 58.7 55.5 57.7 57.2 54.1 56.7 63.7 53.1 57.1 LR-HB1 59.7 60.0 58.6 53.2 54.6 55.6 63.7 54.5 57.5 LR-HB4 60.4 57.3 58.7 57.2 54.8 57.3 63.3 53.8 57.9 LR-KB1 60.4 59.7 58.0 54.0 54.1 54.7 63.4 54.9 57.5 LR-KB4 61.0 57.2 58.5 58.6 54.8 56.8 63.1 55.0 58.7 Table 3. 
The percentage consistency between human judgements of rank and metrics. The LRscore variations (LR-*) are optimised for average consistency across language pair (shown in right hand column). The bold numbers represent the best consistency score per language pair. the metric rewards, but will be blind to aspects of translation quality that are not directly captured by the metric. We apply the LRscore in order to improve the reordering performance of a phrase-based translation model. 4.1 Experimental Design We hypothesise that the LRscore is a good metric for training translation models. We test this by evaluating the output of the models, first with automatic metrics, and then by using human evaluation. We choose to run the experiment with Chinese-English as this language pair has a large amount of medium and long distance reorderings. 4.1.1 Training Setup The experiments are carried out with ChineseEnglish data from GALE. We use the official test set of the 2006 NIST evaluation (1994 sentences). For the development test set, we used the evaluation set from the GALE 2008 evaluation (2010 sentences). Both development set and test set have four references. The phrase table was built from 1.727M parallel sentences from the GALE Y2 training data. The phrase-based translation model called MOSES was used, with all the default settings. We extracted phrases as in (Koehn et al., 2003) by running GIZA++ in both directions and merging alignments with the grow-diag-final heuristic. We used the Moses translation toolkit, including a lexicalised reordering model. The SRILM language modelling toolkit (Stolcke, 2002) was used with interpolated Kneser-Ney discounting. There are three separate 3gram language models trained on the English side of parallel corpus, the AFP part of the Gigaword corpus, and the Xinhua part of the Gigaword corLR-HB1 LR-HB4 LR-KB1 LR-KB4 26.40 07.19 43.33 26.23 Table 4. The parameter setting representing the % impact of the reordering component for the different versions of the LRscore metric. pus. A 4 or 5-gram language model would have led to higher scores for all objective functions, but would not have changed the findings in this paper. We used the MERT code available in the MOSES repository (Bertoldi et al., 2009). The reordering metrics require alignments which were created using the Berkeley word alignment package version 1.1 (Liang et al., 2006), with the posterior probability to being 0.5. We first extracted the LRscore Kendall’s tau distance from the monotone for the Chinese-English test set and this value was 66.1%. This is far more reordering than the other language pairs shown in Table 2. We then calculated the optimal parameter setting, using the reordering amount as a power exponent. Table 4 shows the parameter settings we used in the following experiments. The optimal amount of reordering for LR-HB4 is low, but the results show it still makes an important contribution. 4.1.2 Human Evaluation Setup Human judgements of translation quality are necessary to determine whether humans prefer sentences from models trained with the BLEU score or with the LRscore. There have been some recent studies which have used the online micro-market, Amazons Mechanical Turk, to collect human annotations (Snow et al., 2008; Callison-Burch, 2009). While some of the data generated is very noisy, invalid responses are largely due to a small number of workers (Kittur et al., 2008). 
We use Mechanical 1032 Turk and we improve annotation quality by collecting multiple judgements, and eliminating workers who do not achieve a certain level of performance on gold standard questions. We randomly selected a subset of sentences from the test set. We use 60 sentences each for comparing training with BLEU to training with LR-HB4 and with LR-KB4. These sentences were between 15 and 30 words long. Shorter sentences tend to have uninteresting differences, and longer sentences may have many conflicting differences. Workers were presented with a reference sentence and two translations which were randomly ordered. They were told to compare the translations and select their preferred translation or “Don’t Know”. Workers were screened to guarantee reasonable judgement quality. 20 sentence pairs were randomly selected from the 120 test units and annotated as gold standard questions. Workers who got less than 60% of these gold questions correct were disqualified and their judgements discarded. After disagreeing with a gold annotation, a worker is presented with the gold answer and an explanation. This guides the worker on how to perform the task and motivates them to be more accurate. We used the Crowdflower2 interface to Mechanical Turk, which implements the gold functionality. Even though experts can disagree on preference judgements, gold standard labels are necessary to weed out the poor standard workers. There were 21 trusted workers who achieved an average accuracy of 91% on the gold. There were 96 untrusted workers who averaged 29% accuracy on the gold. Their judgements were discarded. Three judgements were collected from the trusted workers for each of the 120 test sentences. 4.2 Results 4.2.1 Automatic Evaluation of MERT In this experiment we demonstrate that the reordering metrics can be used as learning criterion in minimum error rate training to improve parameter estimation for machine translation. Table 5 reports the average of three runs of MERT training with different objective functions. The lexical metric BLEU is used as an objective function in 2http://www.crowdflower.com Metrics PPPPP Obj.Func. BLEU LR-HB4 LR-KB4 TER MET. BLEU 31.1 32.1 41.0 60.7 55.5 LRHB4 31.1 32.2 41.3 60.6 55.7 LRKB4 31.0 32.2 41.2 61.0 55.8 Table 5. Average results of three different MERT runs for different objective functions. isolation, and also as part of the LRscore together with the Hamming distance and Kendall’s tau distance. We test with these metrics, and we also report the TER and METEOR scores for comparison. The first thing we note in Table 5 is that we would expect the highest scores when training with the same metric as that used for evaluation as MERT maximises the objective function on the development data set. Here, however, when testing with BLEU, we see that training with BLEU and with LR-HB4 leads to equally high BLEU scores. The reordering component is more discerning than the BLEU score. It reliably increases as the word order approaches that of the reference, whereas BLEU can reports the same score for a large number of different alternatives. This might make the reordering metric easier to optimise, leading to the joint best scores at test time. This is an important result, as it shows that by training with the LRscore objective function, BLEU scores do not decrease, which is desirable as BLEU scores are usually reported in the field. The LRscore also results in better scores when evaluated with itself and the other two baseline metrics, TER and METEOR. 
Reordering and the lexical metrics are orthogonal information sources, and this shows that combining them results in better performing systems. BLEU has shown to be a strong baseline metric to use as an objective function (Cer et al., 2010), and so the LRscore performance in Table 5 is a good result. Examining the weights that result from the different MERT runs, the only notable difference is that the weight of the distortion cost is considerably lower with the LRscore. This shows more trust in the quality of reorderings. Although it is interesting to look at the model weights, any final conclusion on the impact of the metrics on training must depend on human evaluation of translation quality. 1033 Type Sentence Reference silicon valley is still a rich area in the united states. the average salary in the area was us $62,400 a year, which was 64% higher than the american average. LR-KB4 silicon valley is still an affluent area of the united states, the regional labor with an average annual salary of 6.24 million us dollars, higher than the average level of 60 per cent. BLEU silicon valley is still in the united states in the region in an affluent area of the workforce, the average annual salary of 6.24 million us dollars, higher than the average level of 60 per cent Table 7. A reference sentence is compared with output from models trained with BLEU and with the LR-KB4 lrscore. Prefer LR Prefer BLEU Don’t Know LR-KB4 96 79 5 LR-HB4 93 79 8 Total 189 (52.5%) 158 (43.9%) 13 Table 6. The number of the times human judges preferred the output of systems trained either with the LRscore or with the BLEU score, or were unable to choose. 4.2.2 Human Evaluation We collect human preference judgements for output from systems trained using the BLEU score and the LRscore in order to determine whether training with the LRscore leads to genuine improvements in translation quality. Table 6 shows the number of the times humans preferred the LRscore or the BLEU score output, or when they did not know. We can see that humans have a greater preference for the output for systems trained with the LRscore, which is preferred 52.5% of the time, compared to the BLEU score, which was only preferred 43.9% of the time. The sign test can be used to determine whether this difference is significant. Our null hypothesis is that the probability of a human preferring the LRscore trained output is the same as that of preferring the BLEU trained output. The one-tailed alternative hypothesis is that humans prefer the LRscore output. If the null hypothesis is true, then there is only a probability of 0.048 that 189 out of 347 (189 + 158) people will select the LRscore output. We therefore discard the null hypothesis and the human preference for the output of the LRscore trained system is significant to the 95% level. In order to judge how reliable our judgements are we calculate the inter-annotator agreement. This is given by the Kappa coefficient (K), which balances agreement with expected agreement. The Kappa coefficient is 0.464 which is considered to be a moderate level of agreement. In analysis of the results, we found that output from the system trained with the LRscore tend to produce sentences with better structure. In Table 7 we see a typical example. The word order of the sentence trained with BLEU is mangled, whereas the LR-KB4 model outputs a clear translation which more closely matches the reference. It also garners higher reordering and BLEU scores. 
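As a quick check of the significance arithmetic reported above, the one-tailed sign test can be computed directly from the binomial tail (Python, standard library; ties are excluded as in the paper, and an exact tail may differ slightly from the reported figure, which could rest on an approximation):

```python
from math import comb

def sign_test_p(successes, trials):
    """One-tailed sign test: probability of observing at least `successes`
    preferences out of `trials` non-tied judgements under the null
    hypothesis that both systems are equally likely to be preferred."""
    tail = sum(comb(trials, k) for k in range(successes, trials + 1))
    return tail / 2 ** trials

# 189 judges preferred the LRscore-trained output, 158 the BLEU-trained
# output; the 13 "Don't Know" judgements are excluded.
p = sign_test_p(189, 189 + 158)
print(round(p, 3))   # a little over 0.05 for the exact tail; the paper reports 0.048
```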
We expect that more substantial gains can be made in the future by using models which have more powerful reordering capabilities. A richer set of reordering features, and a model capable of longer distance reordering would better leverage metrics which reward good word orderings. 5 Conclusion We introduced the LRscore which combines a lexical and a reordering metric. The main motivation for this metric is the fact that it measures the reordering quality of MT output by using permutation distance metrics. It is a simple, decomposable metric which interpolates the reordering component with a lexical component, the BLEU score. This paper demonstrates that the LRscore metric is more consistent with human preference judgements of machine translation quality than other machine translation metrics. We also show that when training a phrase-based translation model with the LRscore as the objective function, the model retains its performance as measured by the baseline metrics. Crucially, however, optimisation using the LRscore improves subjective evaluation. Ultimately, the availability of a metric which reliably measures reordering performance should accelerate progress towards developing more powerful reordering models. 1034 References Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for MT evaluation with improved correlation with human judgments. In Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization. Nicola Bertoldi, Barry Haddow, and Jean-Baptiste Fouet. 2009. Improved Minimum Error Rate Training in Moses. The Prague Bulletin of Mathematical Linguistics, 91:7–16. Alexandra Birch, Phil Blunsom, and Miles Osborne. 2010. Metrics for MT Evaluation: Evaluating Reordering. Machine Translation, 24(1):15–26. Ondrej Bojar and Zdenek Zabokrtsky. 2009. CzEng0.9: Large Parallel Treebank with Rich Annotation. Prague Bulletin of Mathematical Linguistics, 92:63– 84. Chris Callison-Burch, Philipp Koehn, Christof Monz, and Josh Schroeder. 2009. Findings of the 2009 Workshop on Statistical Machine Translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 1–28, Athens, Greece, March. Association for Computational Linguistics. Chris Callison-Burch. 2009. Fast, cheap, and creative: evaluating translation quality using Amazon’s Mechanical Turk. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 286–295, Singapore, August. Association for Computational Linguistics. Daniel Cer, Christopher D. Manning, and Daniel Jurafsky. 2010. The best lexical metric for phrase-based statistical MT system optimization. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 555–563, Los Angeles, California, June. Richard Hamming. 1950. Error detecting and error correcting codes. Bell System Technical Journal, 26(2):147–160. Maurice Kendall. 1938. A new measure of rank correlation. Biometrika, 30:81–89. A. Kittur, E. H. Chi, and B. Suh. 2008. Crowdsourcing user studies with Mechanical Turk. In Proceeding of the twenty-sixth annual SIGCHI conference on Human factors in computing systems, pages 453–456. ACM. Philipp Koehn, Franz Och, and Daniel Marcu. 2003. Statistical Phrase-Based translation. In Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference, pages 127–133, Edmonton, Canada. Association for Computational Linguistics. Philipp Koehn. 2005. 
Europarl: A parallel corpus for statistical machine translation. In Proceedings of MTSummit. Alon Lavie and Abhaya Agarwal. 2008. Meteor, m-BLEU and m-TER: Evaluation metrics for highcorrelation with human rankings of machine translation output. In Proceedings of the Workshop on Statistical Machine Translation at the Meeting of the Association for Computational Linguistics (ACL-2008), pages 115–118. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 104–111, New York City, USA, June. Association for Computational Linguistics. Chin-Yew Lin and Franz Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In Proceedings of the Conference on Computational Linguistics, pages 501–507. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the Association for Computational Linguistics, pages 160– 167, Sapporo, Japan. Sebastian Pad´o, Daniel Cer, Michel Galley, Dan Jurafsky, and Christopher D. Manning. 2009. Measuring machine translation quality as semantic equivalence: A metric based on entailment features. Machine Translation, pages 181–193. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Association for Computational Linguistics, pages 311– 318, Philadelphia, USA. Matthew Snover, Bonnie Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas, pages 223–231. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good?: Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 254–263. Association for Computational Linguistics. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of Spoken Language Processing, pages 901–904. Billy Wong and Chunyu Kit. 2009. ATEC: automatic evaluation of machine translation via word choice and word order. Machine Translation, 23(2-3):141–155. 1035
2011
103
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1036–1044, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Reordering with Source Language Collocations Zhanyi Liu1,2, Haifeng Wang2, Hua Wu2, Ting Liu1, Sheng Li1 1Harbin Institute of Technology, Harbin, China 2Baidu Inc., Beijing, China {liuzhanyi, wanghaifeng, wu_hua}@baidu.com {tliu, lisheng}@hit.edu.cn Abstract This paper proposes a novel reordering model for statistical machine translation (SMT) by means of modeling the translation orders of the source language collocations. The model is learned from a word-aligned bilingual corpus where the collocated words in source sentences are automatically detected. During decoding, the model is employed to softly constrain the translation orders of the source language collocations, so as to constrain the translation orders of those source phrases containing these collocated words. The experimental results show that the proposed method significantly improves the translation quality, achieving the absolute improvements of 1.1~1.4 BLEU score over the baseline methods. 1 Introduction Reordering for SMT is first proposed in IBM models (Brown et al., 1993), usually called IBM constraint model, where the movement of words during translation is modeled. Soon after, Wu (1997) proposed an ITG (Inversion Transduction Grammar) model for SMT, called ITG constraint model, where the reordering of words or phrases is constrained to two kinds: straight and inverted. In order to further improve the reordering performance, many structure-based methods are proposed, including the reordering model in hierarchical phrase-based SMT systems (Chiang, 2005) and syntax-based SMT systems (Zhang et al., 2007; Marton and Resnik, 2008; Ge, 2010; Visweswariah et al., 2010). Although the sentence structure has been taken into consideration, these methods don‟t explicitly make use of the strong correlations between words, such as collocations, which can effectively indicate reordering in the target language. In this paper, we propose a novel method to improve the reordering for SMT by estimating the reordering score of the source-language collocations (source collocations for short in this paper). Given a bilingual corpus, the collocations in the source sentence are first detected automatically using a monolingual word alignment (MWA) method without employing additional resources (Liu et al., 2009), and then the reordering model based on the detected collocations is learned from the word-aligned bilingual corpus. The source collocation based reordering model is integrated into SMT systems as an additional feature to softly constrain the translation orders of the source collocations in the sentence to be translated, so as to constrain the translation orders of those source phrases containing these collocated words. This method has two advantages: (1) it can automatically detect and leverage collocated words in a sentence, including long-distance collocated words; (2) such a reordering model can be integrated into any SMT systems without resorting to any additional resources. We implemented the proposed reordering model in a phrase-based SMT system, and the evaluation results show that our method significantly improves translation quality. As compared to the baseline systems, an absolute improvement of 1.1~1.4 BLEU score is achieved. 
The paper is organized as follows: In section 2, we describe the motivation to use source collocations for reordering, and briefly introduce the collocation extraction method. In section 3, we present our reordering model. We then describe the experimental results in sections 4 and 5. In section 6, we describe the related work. Lastly, we conclude in section 7.

2 Collocation

A collocation is generally composed of a group of words that occur together more often than by chance. Collocations effectively reveal the strong association among words in a sentence and are widely employed in a variety of NLP tasks (Mckeown and Radey, 2000). Given two words in a collocation, they can be translated in the same order as in the source language, or in the inverted order. We name the first case straight, and the second inverted.

Based on the observation that some collocations tend to have fixed translation orders, such as "金融 jin-rong 'financial' 危机 wei-ji 'crisis'" (financial crisis) whose English translation order is usually straight, and "法律 fa-lv 'law' 范围 fan-wei 'scope'" (scope of law) whose English translation order is generally inverted, some methods have been proposed to improve the reordering model for SMT based on the collocated words crossing the neighboring components (Xiong et al., 2006). We further notice that some words are translated in different orders when they are collocated with different words. For instance, when "潮流 chao-liu 'trend'" is collocated with "时代 shi-dai 'times'", they are often translated into the "trend of times"; when collocated with "历史 li-shi 'history'", the translation usually becomes the "historical trend". Thus, if we can automatically detect the collocations in the sentence to be translated and their orders in the target language, the reordering information of the collocations could be used to constrain the reordering of phrases during decoding. Therefore, in this paper, we propose to improve the reordering model for SMT by estimating the reordering score based on the translation orders of the source collocations.

In general, the collocations can be automatically identified based on syntactic information such as dependency trees (Lin, 1998). However these methods may suffer from parsing errors. Moreover, for many languages, no valid dependency parser exists. Liu et al. (2009) proposed to automatically detect the collocated words in a sentence with the MWA method. The advantage of this method lies in that it can identify the collocated words in a sentence without additional resources. In this paper, we employ MWA Models 1~3 described in Liu et al. (2009) to detect collocations in sentences, which are shown in Eq. (1)~(3):

p_{\text{MWA Model 1}}(A \mid S) = \prod_{j=1}^{l} t(w_j \mid w_{c_j})    (1)

p_{\text{MWA Model 2}}(A \mid S) = \prod_{j=1}^{l} t(w_j \mid w_{c_j}) \, d(j \mid c_j, l)    (2)

p_{\text{MWA Model 3}}(A \mid S) = \prod_{i=1}^{l} n(\phi_i \mid w_i) \prod_{j=1}^{l} t(w_j \mid w_{c_j}) \, d(j \mid c_j, l)    (3)

where S = w_1^l is a monolingual sentence; \phi_i denotes the number of words collocating with w_i; and A = \{(i, c_i) \mid i \in [1, l] \ \& \ c_i \neq i\} denotes the potentially collocated words in S. The MWA models measure the collocated words under different constraints. MWA Model 1 only models word collocation probabilities t(w_j \mid w_{c_j}). MWA Model 2 additionally employs position collocation probabilities d(j \mid c_j, l). Besides the features in MWA Model 2, MWA Model 3 also considers fertility probabilities n(\phi_i \mid w_i). Given a sentence, the optimal collocated words can be obtained according to Eq. (4):
A^* = \arg\max_A p_{MWA Model i}(A|S)    (4)

Given a monolingual word-aligned corpus, the collocation probabilities can be estimated as follows:

r(w_i, w_j) = ( p(w_i|w_j) + p(w_j|w_i) ) / 2    (5)

where p(w_i|w_j) = count(w_i, w_j) / \sum_{w} count(w, w_j); (w_i, w_j) denotes the collocated words in the corpus and count(w_i, w_j) denotes the co-occurrence frequency.

3 Reordering Model with Source Language Collocations

In this section, we first describe how to estimate the orientation probabilities for a given collocation, and then describe the estimation of the reordering score during translation. Finally, we describe the integration of the reordering model into the SMT system.

3.1 Reordering probability estimation

Given a source collocation (f_i, f_j) and its corresponding translations (e_{a_i}, e_{a_j}) in a bilingual sentence pair, the reordering orientation of the collocation can be defined as in Eq. (6):

o_{i,j,a_i,a_j} = straight, if (i < j & a_i < a_j) or (i > j & a_i > a_j);
o_{i,j,a_i,a_j} = inverted, if (i < j & a_i > a_j) or (i > j & a_i < a_j)    (6)

In our method, only those collocated words in the source language that are aligned to different target words are taken into consideration, and those being aligned to the same target word are ignored. Given a word-aligned bilingual corpus where the collocations in source sentences are detected, the probabilities of the translation orientation of collocations in the source language can be estimated as follows:

p(o = straight | f_i, f_j) = count(o = straight, f_i, f_j) / \sum_{o} count(o, f_i, f_j)    (7)

p(o = inverted | f_i, f_j) = count(o = inverted, f_i, f_j) / \sum_{o} count(o, f_i, f_j)    (8)

Here, count(o, f_i, f_j) is collected according to the algorithm in Figure 1.

Input: A word-aligned bilingual corpus where the source collocations are detected
Initialization: count(o, f_i, f_j) = 0
for each sentence pair <F, E> in the corpus do
    for each collocated word pair (f_i, f_{c_i}) in F do
        if (i < c_i & a_i < a_{c_i}) or (i > c_i & a_i > a_{c_i}) then
            count(o = straight, f_i, f_{c_i}) += 1
        if (i < c_i & a_i > a_{c_i}) or (i > c_i & a_i < a_{c_i}) then
            count(o = inverted, f_i, f_{c_i}) += 1
Output: count(o, f_i, f_j)
Figure 1. Algorithm of estimating reordering frequency

3.2 Reordering model

Given a sentence F = f_1^l to be translated, the collocations are first detected using the algorithm described in Eq. (4). Then the reordering score is estimated according to the reordering probability weighted by the collocation probability of the collocated words. Formally, for a generated translation candidate T, the reordering score is calculated as follows:

P_O(F, T) = \sum_{(i, c_i)} r(f_i, f_{c_i}) \log p(o_{i,c_i,a_i,a_{c_i}} | f_i, f_{c_i})    (9)

Here, r(f_i, f_{c_i}) denotes the collocation probability of f_i and f_{c_i} as shown in Eq. (5). In addition to the detected collocated words in the sentence, we also consider other possible word pairs whose collocation probabilities are higher than a given threshold. Thus, the reordering score is further improved according to Eq. (10):

P_O(F, T) = \alpha \sum_{(i, c_i)} r(f_i, f_{c_i}) \log p(o_{i,c_i,a_i,a_{c_i}} | f_i, f_{c_i}) + \beta \sum_{(i, j) \notin \{(i, c_i)\}, \, r(f_i, f_j) > \delta} r(f_i, f_j) \log p(o_{i,j,a_i,a_j} | f_i, f_j)    (10)

where \alpha and \beta are two interpolation weights and \delta is the threshold of collocation probability. The weights and the threshold can be tuned using a development set.
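To make the counting procedure in Figure 1 and the estimates in Eqs. (5)~(10) concrete, the following Python sketch shows one possible implementation. The data layout (dicts with "source", "collocations" and "alignment" keys), the fall-back value for unseen pairs and all function names are illustrative assumptions, not details of the authors' implementation.

```python
from collections import defaultdict
from math import log

def count_orientations(corpus):
    """Collect count(o, f_i, f_j) as in Figure 1.

    Each corpus entry is assumed to be a dict with
      "source":       list of source words,
      "collocations": list of source position pairs (i, c_i) detected by the MWA method,
      "alignment":    dict mapping a source position to its aligned target position
                      (missing or None if the word is unaligned).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for sent in corpus:
        f, alignment = sent["source"], sent["alignment"]
        for i, j in sent["collocations"]:
            ai, aj = alignment.get(i), alignment.get(j)
            # words aligned to the same target word (or unaligned) are ignored
            if ai is None or aj is None or ai == aj:
                continue
            orientation = "straight" if (i < j) == (ai < aj) else "inverted"  # Eq. (6)
            counts[(f[i], f[j])][orientation] += 1
    return counts

def orientation_prob(counts, fi, fj, orientation):
    """Relative-frequency estimate of p(o | f_i, f_j), Eqs. (7)-(8)."""
    c = counts.get((fi, fj), {})
    total = sum(c.values())
    return c[orientation] / total if total else 0.5  # unseen pairs: fall back to 0.5

def reordering_score(sent, counts, r):
    """Reordering score of Eq. (9) for one hypothesised alignment.

    `r(w1, w2)` is the collocation probability of Eq. (5), passed in as a function.
    """
    f, alignment = sent["source"], sent["alignment"]
    score = 0.0
    for i, ci in sent["collocations"]:
        ai, aci = alignment.get(i), alignment.get(ci)
        if ai is None or aci is None or ai == aci:
            continue
        o = "straight" if (i < ci) == (ai < aci) else "inverted"
        p = orientation_prob(counts, f[i], f[ci], o)
        if p > 0.0:
            score += r(f[i], f[ci]) * log(p)
    return score
```

Eq. (10) would additionally loop over non-collocation word pairs whose r-value exceeds the threshold and weight the two sums with the tuned interpolation weights.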
3.3 Integration into the SMT system

SMT systems generally employ the log-linear model to integrate various features (Chiang, 2005; Koehn et al., 2007). Given an input sentence F, the final translation E* with the highest score is chosen from the candidates, as in Eq. (11):

E^* = \arg\max_E \{ \sum_{m=1}^{M} \lambda_m h_m(E, F) \}    (11)

where h_m(E, F) (m = 1, ..., M) denotes the features and \lambda_m is a feature weight. Our reordering model can be integrated into the system as one feature, as shown in Eq. (10).

Figure 2. An example for reordering (source words f_1 ... f_5 and target words e_1 ... e_4)

4 Evaluation of Our Method

4.1 Implementation

We implemented our method in a phrase-based SMT system (Koehn et al., 2007). Based on the GIZA++ package (Och and Ney, 2003), we implemented an MWA tool for collocation detection. Thus, given a sentence to be translated, we first identify the collocations in the sentence, and then estimate the reordering score according to the translation hypothesis. For a translation option to be expanded, the reordering score inside this source phrase is calculated according to the translation orders of the collocations in the corresponding target phrase. The reordering score crossing the current translation option and the covered parts can be calculated according to the relative position of the collocated words. If the source phrase matched by the current translation option is behind the covered parts in the source sentence, then \log p(o = straight | ...) is used, otherwise \log p(o = inverted | ...). For example, in Figure 2, the current translation option is (f_2 f_3 \to e_3 e_4). The collocations related to this translation option are (f_1, f_3), (f_2, f_3), (f_3, f_5). The reordering scores can be estimated as follows:

r(f_1, f_3) \log p(o = straight | f_1, f_3)
r(f_2, f_3) \log p(o = inverted | f_2, f_3)
r(f_3, f_5) \log p(o = inverted | f_3, f_5)

In order to improve the performance of the decoder, we design a heuristic function to estimate the future score, as shown in Figure 3. For any uncovered word and its collocates in the input sentence, if the collocate is uncovered, then the higher reordering probability is used. If the collocate has been covered, then the reordering orientation can be determined according to the relative positions of the words and the corresponding reordering probability is employed.

Input: Input sentence F = f_1^L
Initialization: Score = 0
for each uncovered word f_i do
    for each word f_j (j = c_i or r(f_i, f_j) > \delta) do
        if f_j is covered then
            if i > j then Score += r(f_i, f_j) \log p(o = straight | f_i, f_j)
            else Score += r(f_i, f_j) \log p(o = inverted | f_i, f_j)
        else Score += r(f_i, f_j) \max_o \log p(o | f_i, f_j)
Output: Score
Figure 3. Heuristic function for estimating future score

4.2 Settings

We use the FBIS corpus (LDC2003E14) to train a Chinese-to-English phrase-based translation model, and the SRI language modeling toolkit (Stolcke, 2002) is used to train a 5-gram language model on the English sentences of the FBIS corpus. We used the NIST evaluation set of 2002 as the development set to tune the feature weights of the SMT system and the interpolation parameters, based on the minimum error rate training method (Och, 2003), and the NIST evaluation sets of 2004 and 2008 (MT04 and MT08) as the test sets. We use BLEU (Papineni et al., 2002) as the evaluation metric. We also calculate the statistical significance of the differences between our methods and the baseline method by using the paired bootstrap resampling method (Koehn, 2004).
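The significance test mentioned above (Koehn, 2004) can be summarized in a few lines. The sketch below is a generic illustration rather than the authors' code; it assumes sentence-level outputs for the two systems and a caller-supplied corpus-level metric function (called corpus_metric here), for example BLEU.

```python
import random

def paired_bootstrap(sys_a, sys_b, refs, corpus_metric, n_samples=1000, seed=0):
    """Paired bootstrap resampling (Koehn, 2004): estimate how often system A
    beats system B on resampled test sets of the same size.

    sys_a, sys_b  : lists of hypothesis sentences (one per test sentence)
    refs          : list of reference translations, in the same order
    corpus_metric : function(list_of_hyps, list_of_refs) -> corpus-level score
    """
    assert len(sys_a) == len(sys_b) == len(refs)
    rng = random.Random(seed)
    n = len(refs)
    wins_a = 0
    for _ in range(n_samples):
        idx = [rng.randrange(n) for _ in range(n)]  # sample sentences with replacement
        score_a = corpus_metric([sys_a[i] for i in idx], [refs[i] for i in idx])
        score_b = corpus_metric([sys_b[i] for i in idx], [refs[i] for i in idx])
        if score_a > score_b:
            wins_a += 1
    # If A wins in at least 95% of the samples, the improvement is usually
    # reported as significant at p < 0.05.
    return wins_a / n_samples
```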
4.3 Translation results

We compare the proposed method with various reordering methods in previous work.

Monotone model: no reordering model is used.

Distortion based reordering (DBR) model: a distortion based reordering method (Al-Onaizan and Papineni, 2006). In this method, the distortion cost is defined in terms of words, rather than phrases. This method considers outbound, inbound, and pairwise distortions that are directly estimated by simple counting over alignments in the word-aligned bilingual corpus. This method is similar to our proposed method, but our method considers the translation order of the collocated words.

msd-bidirectional-fe reordering (MSDR or Baseline) model: it is one of the reordering models in Moses. It considers three different orientation types (monotone, swap, and discontinuous) on both source phrases and target phrases, and the translation orders of both the next phrase and the previous phrase with respect to the current phrase are modeled.

Source collocation based reordering (SCBR) model: our proposed method. We investigate three reordering models based on the corresponding MWA models and their combinations. In SCBR Model i (i=1~3), we use MWA Model i as described in section 2 to obtain the collocated words and estimate the reordering probabilities according to section 3.

Reorder models         MT04   MT08
Monotone model         26.99  18.30
DBR model              26.64  17.83
MSDR model (Baseline)  28.77  18.42
MSDR + DBR model       28.91  18.58
SCBR Model 1           29.21  19.28
SCBR Model 2           29.44  19.36
SCBR Model 3           29.50  19.44
SCBR models (1+2)      29.65  19.57
SCBR models (1+2+3)    29.75  19.61
Table 1. Translation results on various reordering models

The experimental results are shown in Table 1. The DBR model suffers from serious data sparseness. For example, the reordering cases in the trained pairwise distortion model only covered 32~38% of those in the test sets. So its performance is worse than that of the monotone model. The MSDR model achieves higher BLEU scores than the monotone model and the DBR model. Our models further improve the translation quality, achieving better performance than the combination of the MSDR model and the DBR model.

The results in Table 1 show that "MSDR + SCBR Model 3" performs the best among the SCBR models. This is because, as compared to MWA Models 1 and 2, MWA Model 3 takes more information into consideration, including not only the co-occurrence information of lexical tokens and the position of words, but also the fertility of words in a sentence. And when the three SCBR models are combined, the performance of the SMT system is further improved. As compared to other reordering models, our models achieve an absolute improvement of 0.98~1.19 BLEU score on the test sets, which are statistically significant (p < 0.05).

Figure 4 shows an example: T1 is generated by the baseline system and T2 is generated by the system where the SCBR models (1+2+3)1 are used.

1 In the remainder of this paper, "SCBR models" means the combination of the SCBR models (1+2+3) unless it is explicitly explained.

Figure 4. Translation example. (*/*) denotes (p_straight / p_inverted); the collocation arcs in the figure are annotated with (0.99/0.01), (0.21/0.79) and (0.95/0.05).
Input: 双方 的 基本 立场 也 都 没有 松动 。
       shuang-fang DE ji-ben li-chang ye dou mei-you song-dong .
       both-side DE basic stance also both not loose .
T1: The two sides are also the basic stand of not relaxed.
T2: The basic stance of the two sides have not relaxed.
Reference: The basic stances of both sides did not move.
The input sentence contains three collocations. The collocation (基本, 立场) is included in the same phrase and translated together as a whole. Thus its translation is correct in both translations. For the other two long-distance collocations (双方, 立场) and (立场, 松动), their translation orders are not correctly handled by the reordering model in the baseline system. For the collocation (双方, 立场), since the SCBR models indicate p(o=straight|双方, 立场) < p(o=inverted|双方, 立场), the system finally generates the translation T2 by constraining their translation order with the proposed model.

5 Collocations vs. Co-occurring Words

We compared our method with the method that models the reordering orientations based on co-occurring words in the source sentences, rather than the collocations.

5.1 Co-occurrence based reordering model

We use the similar algorithm described in section 3 to train the co-occurrence based reordering (CBR) model, except that the probability of the reordering orientation is estimated on the co-occurring words and the relative distance. Given an input sentence and a translation candidate, the reordering score is estimated as shown in Eq. (12):

P_O(F, T) = \sum_{(i, j)} \log p(o_{i,j,a_i,a_j} | f_i, f_j, i-j)    (12)

Here, i-j is the relative distance of two words in the source sentence. We also construct the weighted co-occurrence based reordering (WCBR) model. In this model, the probability of the reordering orientation is additionally weighted by the pointwise mutual information score2 of the two words (Manning and Schütze, 1999), which is estimated as shown in Eq. (13):

P_O(F, T) = \sum_{(i, j)} s_{MI}(f_i, f_j) \log p(o_{i,j,a_i,a_j} | f_i, f_j, i-j)    (13)

2 For occurring words extraction, the window size is set to [-6, +6].

5.2 Translation results

Reordering models    MT04   MT08
MSDR model           28.77  18.42
MSDR + DBR model     28.91  18.58
CBR model            28.96  18.77
WCBR model           29.15  19.10
WCBR + SCBR models   29.87  19.83
Table 2. Translation results of co-occurrence based reordering models

Table 2 shows the translation results. It can be seen that the performance of the SMT system is improved by integrating the CBR model. The performance of the CBR model is also better than that of the DBR model. It is because the former is trained based on all co-occurring aligned words, while the latter only considers the adjacent aligned words. When the WCBR model is used, the translation quality is further improved. However, its performance is still inferior to that of the SCBR models, indicating that our method (SCBR models) of modeling the translation orders of source collocations is more effective. Furthermore, we combine the weighted co-occurrence based model and our method, which outperform all the other models.
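For concreteness, a rough Python sketch of the pointwise mutual information weight s_MI(f_i, f_j) used by the WCBR model is given below; the [-6, +6] window follows the footnote above, while the simple maximum-likelihood normalisations and the function names are illustrative choices rather than details taken from the paper.

```python
from collections import Counter
from math import log

def collect_cooccurrence_counts(sentences, window=6):
    """Count unigram frequencies and within-window word-pair co-occurrences."""
    unigrams, pairs, tokens = Counter(), Counter(), 0
    for words in sentences:
        for i, w in enumerate(words):
            unigrams[w] += 1
            tokens += 1
            # count each pair once, left to right, within a [-window, +window] context
            for j in range(i + 1, min(i + window + 1, len(words))):
                pairs[frozenset((w, words[j]))] += 1
    return unigrams, pairs, tokens

def pmi(wi, wj, unigrams, pairs, tokens):
    """s_MI(wi, wj) = log( p(wi, wj) / (p(wi) * p(wj)) ), with plain MLE estimates."""
    cij = pairs.get(frozenset((wi, wj)), 0)
    if cij == 0 or unigrams[wi] == 0 or unigrams[wj] == 0:
        return float("-inf")  # never co-occurred (or unseen word)
    return log((cij / tokens) / ((unigrams[wi] / tokens) * (unigrams[wj] / tokens)))
```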
5.3 Result analysis

Precision of prediction

First of all, we investigate the performance of the reordering models by calculating precisions of the translation orders predicted by the reordering models. Based on the source sentences and reference translations of the development set, where the source words and target words are automatically aligned by the bilingual word alignment method, we construct the reference translation orders for two words. Against the references, we calculate three kinds of precisions as follows:

P_{CW} = |{ o_{i,j} : |i-j| = 1 & o_{i,j} = o_{i,j,a_i,a_j} }| / |{ o_{i,j} : |i-j| = 1 }|    (14)

P_{IW} = |{ o_{i,j} : |i-j| > 1 & o_{i,j} = o_{i,j,a_i,a_j} }| / |{ o_{i,j} : |i-j| > 1 }|    (15)

P_{total} = |{ o_{i,j} : o_{i,j} = o_{i,j,a_i,a_j} }| / |{ o_{i,j} }|    (16)

Here, o_{i,j} denotes the translation order of (f_i, f_j) predicted by the reordering models. If p(o=straight|f_i, f_j) > p(o=inverted|f_i, f_j), then o_{i,j} = straight, else if p(o=straight|f_i, f_j) < p(o=inverted|f_i, f_j), then o_{i,j} = inverted. o_{i,j,a_i,a_j} denotes the translation order derived from the word alignments. If o_{i,j} = o_{i,j,a_i,a_j}, then the predicted translation order is correct, otherwise wrong. P_{CW} and P_{IW} denote the precisions calculated on the consecutive words and the interrupted words in the source sentences, respectively. P_{total} denotes the precision on both cases. Here, the CBR model and SCBR Model 3 are compared. The results are shown in Table 3.

                    CBR model   SCBR Model 3
Consecutive words   77.9%       73.5%
Interrupted words   74.1%       87.8%
Total               74.3%       84.9%
Table 3. Precisions of the reordering models on the development set

From the results in Table 3, it can be seen that the CBR model has a higher precision on the consecutive words than the SCBR model, but lower precisions on the interrupted words. It is mainly because the CBR model introduces more noise when the relative distance of words is set to a large number, while the MWA method can effectively detect the long-distance collocations in sentences (Liu et al., 2009). This explains why the combination of the two models can obtain the highest BLEU score as shown in Table 2. On the whole, the SCBR Model 3 achieves higher precision than the CBR model.

Effect of the reordering model

Then we evaluate the reordering results of the generated translations in the test sets. Using the above method, we construct the reference translation orders of collocations in the test sets. For a given word pair in a source sentence, if the translation order in the generated translation is the same as that in the reference translations, then it is correct, otherwise wrong. We compare the translations of the baseline method, the co-occurrence based method, and our method (SCBR models). The precisions calculated on both kinds of words are shown in Table 4.

Test sets   Baseline (MSDR)   MSDR + WCBR   MSDR + SCBR
MT04        78.9%             80.8%         82.5%
MT08        80.7%             83.8%         85.0%
Table 4. Precisions (total) of the reordering models on the test sets

From the results, it can be seen that our method achieves higher precisions than both the baseline and the method modeling the translation orders of the co-occurring words. It indicates that the proposed method effectively constrains the reordering of source words during decoding and improves the translation quality.

6 Related Work

Reordering was first proposed in the IBM models (Brown et al., 1993), later was named IBM constraint by Berger et al. (1996). This model treats the source word sequence as a coverage set that is processed sequentially and a source token is covered when it is translated into a new target token. In 1997, another model called ITG constraint was presented, in which the reordering order can be hierarchically modeled as straight or inverted for two nodes in a binary branching structure (Wu, 1997). Although the ITG constraint allows more flexible reordering during decoding, Zens and Ney (2003) showed that the IBM constraint results in higher BLEU scores.
Our method models the reordering of collocated words in sentences instead of all words in IBM models or two neighboring blocks in ITG models. For phrase-based SMT models, Koehn et al. (2003) linearly modeled the distance of phrase movements, which results in poor global reordering. More methods are proposed to explicitly model the movements of phrases (Tillmann, 2004; Koehn et al., 2005) or to directly predict the orientations of phrases (Tillmann and Zhang, 2005; Zens and Ney, 2006), conditioned on current source phrase or target phrase. Hierarchical phrasebased SMT methods employ SCFG bilingual translation model and allow flexible reordering (Chiang, 2005). However, these methods ignored the correlations among words in the source language or in the target language. In our method, we automatically detect the collocated words in sentences and 1042 their translation orders in the target languages, which are used to constrain the ordering models with the estimated reordering (straight or inverted) score. Moreover, our method allows flexible reordering by considering both consecutive words and interrupted words. In order to further improve translation results, many researchers employed syntax-based reordering methods (Zhang et al., 2007; Marton and Resnik, 2008; Ge, 2010; Visweswariah et al., 2010). However these methods are subject to parsing errors to a large extent. Our method directly obtains collocation information without resorting to any linguistic knowledge or tools, therefore is suitable for any language pairs. In addition, a few models employed the collocation information to improve the performance of the ITG constraints (Xiong et al., 2006). Xiong et al. used the consecutive co-occurring words as collocation information to constrain the reordering, which did not lead to higher translation quality in their experiments. In our method, we first detect both consecutive and interrupted collocated words in the source sentence, and then estimated the reordering score of these collocated words, which are used to softly constrain the reordering of source phrases. 7 Conclusions We presented a novel model to improve SMT by means of modeling the translation orders of source collocations. The model was learned from a wordaligned bilingual corpus where the potentially collocated words in source sentences were automatically detected by the MWA method. During decoding, the model is employed to softly constrain the translation orders of the source language collocations. Since we only model the reordering of collocated words, our methods can partially alleviate the data sparseness encountered by other methods directly modeling the reordering based on source phrases or target phrases. In addition, this kind of reordering information can be integrated into any SMT systems without resorting to any additional resources. The experimental results show that the proposed method significantly improves the translation quality of a phrase based SMT system, achieving an absolute improvement of 1.1~1.4 BLEU score over the baseline methods. References Yaser Al-Onaizan and Kishore Papineni. 2006. Distortion Models for Statistical Machine Translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pp. 529-536. Adam L. Berger, Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Andrew S. Kehler, and Robert L. Mercer. 1996. Language Translation Apparatus and Method of Using Context-Based Translation Models. 
United States Patent, Patent Number 5510981, April. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert. L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter estimation. Computational Linguistics, 19(2): 263311. David Chiang. 2005. A Hierarchical Phrase-based Model for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pp. 263-270. Niyu Ge. 2010. A Direct Syntax-Driven Reordering Model for Phrase-Based Machine Translation. In Proceedings of Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL, pp. 849-857. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pp. 388-395. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Association of Computational Linguistics, pp. 127-133. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the ACL, Poster and Demonstration Sessions, pp. 177-180. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In Proceedings of International Workshop on Spoken Language Translation. 1043 Dekang Lin. 1998. Extracting Collocations from Text Corpora. In Proceedings of the 1st Workshop on Computational Terminology, pp. 57-63. Zhanyi Liu, Haifeng Wang, Hua Wu, and Sheng Li. 2009. Collocation Extraction Using Monolingual Word Alignment Method. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pp. 487-495. Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing, Cambridge, MA; London, U.K.: Bradford Book & MIT Press. Yuval Marton and Philip Resnik. 2008. Soft Syntactic Constraints for Hierarchical Phrased-based Translation. In Proceedings of the 46st Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 1003-1011. Kathleen R. McKeown and Dragomir R. Radev. 2000. Collocations. In Robert Dale, Hermann Moisl, and Harold Somers (Ed.), A Handbook of Natural Language Processing, pp. 507-523. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 160-167. Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1) : 19-51. Kishore Papineni, Salim Roukos, Todd Ward, and Weijing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318. Andreas Stolcke. 2002. SRILM - An Extensible Language Modeling Toolkit. In Proceedings for the International Conference on Spoken Language Processing, pp. 901-904. Christoph Tillmann. 2004. 
A Unigram Orientation Model for Statistical Machine Translation. In Proceedings of the Joint Conference on Human Language Technologies and the Annual Meeting of the North American Chapter of the Association of Computational Linguistics, pp. 101-104. Christoph Tillmann and Tong Zhang. 2005. A Localized Prediction Model for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pp. 557-564. Karthik Visweswariah, Jiri Navratil, Jeffrey Sorensen, Vijil Chenthamarakshan, and Nanda Kambhatla. 2010. Syntax Based Reordering with Automatically Derived Rules for Improved Statistical Machine Translation. In Proceedings of the 23rd International Conference on Computational Linguistics, pp. 11191127. Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3):377-403. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pp. 521-528. Richard Zens and Herman Ney. 2003. A Comparative Study on Reordering Constraints in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 192-202. Richard Zens and Herman Ney. 2006. Discriminative Reordering Models for Statistical Machine Translation. In Proceedings of the Workshop on Statistical Machine Translation, pp. 55-63. Dongdong Zhang, Mu Li, Chi-Ho Li, and Ming Zhou. 2007. Phrase Reordering Model Integrating Syntactic Knowledge for SMT. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 533-540. 1044
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1045–1054, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Joint Sequence Translation Model with Integrated Reordering Nadir Durrani Helmut Schmid Alexander Fraser Institute for Natural Language Processing University of Stuttgart {durrani,schmid,fraser}@ims.uni-stuttgart.de Abstract We present a novel machine translation model which models translation by a linear sequence of operations. In contrast to the “N-gram” model, this sequence includes not only translation but also reordering operations. Key ideas of our model are (i) a new reordering approach which better restricts the position to which a word or phrase can be moved, and is able to handle short and long distance reorderings in a unified way, and (ii) a joint sequence model for the translation and reordering probabilities which is more flexible than standard phrase-based MT. We observe statistically significant improvements in BLEU over Moses for German-to-English and Spanish-to-English tasks, and comparable results for a French-to-English task. 1 Introduction We present a novel generative model that explains the translation process as a linear sequence of operations which generate a source and target sentence in parallel. Possible operations are (i) generation of a sequence of source and target words (ii) insertion of gaps as explicit target positions for reordering operations, and (iii) forward and backward jump operations which do the actual reordering. The probability of a sequence of operations is defined according to an N-gram model, i.e., the probability of an operation depends on the n −1 preceding operations. Since the translation (generation) and reordering operations are coupled in a single generative story, the reordering decisions may depend on preceding translation decisions and translation decisions may depend on preceding reordering decisions. This provides a natural reordering mechanism which is able to deal with local and long-distance reorderings in a consistent way. Our approach can be viewed as an extension of the N-gram SMT approach (Mari˜no et al., 2006) but our model does reordering as an integral part of a generative model. The paper is organized as follows. Section 2 discusses the relation of our work to phrase-based and the N-gram SMT. Section 3 describes our generative story. Section 4 defines the probability model, which is first presented as a generative model, and then shifted to a discriminative framework. Section 5 provides details on the search strategy. Section 6 explains the training process. Section 7 describes the experimental setup and results. Section 8 gives a few examples illustrating different aspects of our model and Section 9 concludes the paper. 2 Motivation and Previous Work 2.1 Relation of our work to PBSMT Phrase-based SMT provides a powerful translation mechanism which learns local reorderings, translation of short idioms, and the insertion and deletion of words sensitive to local context. However, PBSMT also has some drawbacks. (i) Dependencies across phrases are not directly represented in the translation model. (ii) Discontinuous phrases cannot be used. (iii) The presence of many different equivalent segmentations increases the search space. Phrase-based SMT models dependencies between words and their translations inside of a phrase well. 
However, dependencies across phrase boundaries are largely ignored due to the strong phrasal independence assumption.

German                     English
hat er ein buch gelesen    he read a book
hat eine pizza gegessen    has eaten a pizza
er                         he
hat                        has
ein                        a
eine                       a
menge                      lot of
butterkekse                butter cookies
gegessen                   eaten
buch                       book
zeitung                    newspaper
dann                       then
Table 1: Sample Phrase Table

A phrase-based system using the phrase table1 shown in Table 1, for example, correctly translates the German sentence "er hat eine pizza gegessen" to "he has eaten a pizza", but fails while translating "er hat eine menge butterkekse gegessen" (see Table 1 for a gloss) which is translated as "he has a lot of butter cookies eaten" unless the language model provides strong enough evidence for a different ordering. The generation of this sentence in our model starts with generating "er – he", "hat – has". Then a gap is inserted on the German side, followed by the generation of "gegessen – eaten". At this point, the (partial) German and English sentences look as follows:

er hat gegessen
he has eaten

We jump back to the gap on the German side and fill it by generating "eine – a" and "pizza – pizza", for the first example and generating "eine – a", "menge – lot of", "butterkekse – butter cookies" for the second example, thus handling both short and long distance reordering in a unified manner. Learning the pattern "hat gegessen – has eaten" helps us to generalize to the second example with unseen context. Notice how the reordering decision is triggered by the translation decision in our model. The probability of a gap insertion operation after the generation of the auxiliaries "hat – has" will be high because reordering is necessary in order to move the second part of the German verb complex ("gegessen") to its correct position at the end of the clause. This mechanism better restricts reordering than traditional PBSMT and is able to deal with local and long-distance reorderings in a consistent way.

1The examples given in this section are not taken from the real data/system, but made-up for the sake of argument.

Figure 1: (a) Known Context (b) Unknown Context

Another weakness of the traditional phrase-based system is that it can only capitalize on continuous phrases. Given the phrase inventory in Table 1, phrasal MT is able to generate the example in Figure 1(a). The information "hat...gelesen – read" is internal to the phrase pair "hat er ein buch gelesen – he read a book", and is therefore handled conveniently. On the other hand, the phrase table does not have the entry "hat er eine zeitung gelesen – he read a newspaper" (Figure 1(b)). Hence, there is no option but to translate "hat...gelesen" separately, translating "hat" to "has" which is a common translation for "hat" but wrong in the given context. Context-free hierarchical models (Chiang, 2007; Melamed, 2004) have rules like "hat er X gelesen – he read X" to handle such cases. Galley and Manning (2010) recently solved this problem for phrasal MT by extracting phrase pairs with source and target-side gaps. Our model can also use tuples with source-side discontinuities.
The above sentence would be generated by the following sequence of operations: (i) generate “dann – then” (ii) insert a gap (iii) generate “er – he” (iv) backward jump to the gap (v) generate “hat...[gelesen] – read” (only “hat” and “read” are added to the sentences yet) (vi) jump forward to the right-most source word so far generated (vii) insert a gap (viii) continue the source cept (“gelesen” is inserted now) (ix) backward jump to the gap (x) generate “ein – a” (xi) generate “buch – book”. Figure 2: Pattern From this operation sequence, the model learns a pattern (Figure 2) which allows it to generalize to the example in Figure 1(b). The open gap represented by serves a similar purpose as the non-terminal categories in a hierarchical phrase-based system such as Hiero. Thus it generalizes to translate “eine zeitung” in exactly the same way as “ein buch”. 1046 Another problem of phrasal MT is spurious phrasal segmentation. Given a sentence pair and a corresponding word alignment, phrasal MT can learn an arbitrary number of source segmentations. This is problematic during decoding because different compositions of the same minimal phrasal units are allowed to compete with each other. 2.2 Relation of our work to N-gram SMT N-gram based SMT is an alternative to hierarchical and non-hierarchical phrase-based systems. The main difference between phrase-based and N-gram SMT is the extraction procedure of translation units and the statistical modeling of translation context (Crego et al., 2005a). The tuples used in N-gram systems are much smaller translation units than phrases and are extracted in such a way that a unique segmentation of each bilingual sentence pair is produced. This helps N-gram systems to avoid the spurious phrasal segmentation problem. Reordering works by linearization of the source side and tuple unfolding (Crego et al., 2005b). The decoder uses word lattices which are built with linguistically motivated re-write rules. This mechanism is further enhanced with an N-gram model of bilingual units built using POS tags (Crego and Yvon, 2010). A drawback of their reordering approach is that search is only performed on a small number of reorderings that are pre-calculated on the source side independently of the target side. Often, the evidence for the correct ordering is provided by the target-side language model (LM). In the N-gram approach, the LM only plays a role in selecting between the precalculated orderings. Our model is based on the N-gram SMT model, but differs from previous N-gram systems in some important aspects. It uses operation n-grams rather than tuple n-grams. The reordering approach is entirely different and considers all possible orderings instead of a small set of pre-calculated orderings. The standard N-gram model heavily relies on POS tags for reordering and is unable to use lexical triggers whereas our model exclusively uses lexical triggers and no POS information. Linearization and unfolding of the source sentence according to the target sentence enables N-gram systems to handle sourceside gaps. We deal with this phenomenon more directly by means of tuples with source-side discontinuities. The most notable feature of our work is that it has a complete generative story of translation which combines translation and reordering operations into a single operation sequence model. Like the N-gram model2, our model cannot deal with target-side discontinuities. These are eliminated from the training data by a post-editing process on the alignments (see Section 6). 
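To give a concrete picture of what such an operation sequence looks like as data, the toy Python snippet below encodes the eleven operations just listed and scores a sequence with an n-gram model over operations; the token spellings, the uniform stand-in model and the function name are invented for illustration and do not correspond to the actual implementation.

```python
from math import log

# The eleven operations listed above, for "dann hat er ein buch gelesen -- then he read a book",
# written as flat operation tokens (the exact spellings here are our own).
operations = [
    "GEN(dann,then)", "INSERT_GAP", "GEN(er,he)", "JUMP_BACK(1)",
    "GEN(hat..gelesen,read)", "JUMP_FORWARD", "INSERT_GAP",
    "CONTINUE_SOURCE_CEPT", "JUMP_BACK(1)", "GEN(ein,a)", "GEN(buch,book)",
]

def sequence_logprob(ops, ngram_prob, order=9):
    """Score an operation sequence with an n-gram model over operations,
    i.e. sum_j log p(o_j | o_{j-order+1} .. o_{j-1}).

    `ngram_prob(history, op)` is assumed to be a smoothed probability function
    supplied by the caller (for instance backed by an SRILM-style model)."""
    total = 0.0
    for j, op in enumerate(ops):
        history = tuple(ops[max(0, j - order + 1):j])
        total += log(ngram_prob(history, op))
    return total

# A deliberately fake uniform model, only to make the example self-contained.
print(sequence_logprob(operations, lambda history, op: 0.1))
```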
Galley and Manning (2010) found that target-side gaps were not useful in their system and not useful in the hierarchical phrase-based system Joshua (Li et al., 2009). 3 Generative Story Our generative story is motivated by the complex reorderings in the German-to-English translation task. The German and English sentences are jointly generated through a sequence of operations. The English words are generated in linear order3 while the German words are generated in parallel with their English translations. Occasionally the translator jumps back on the German side to insert some material at an earlier position. After this is done, it jumps forward again and continues the translation. The backward jumps always end at designated landing sites (gaps) which were explicitly inserted before. We use 4 translation and 3 reordering operations. Each is briefly discussed below. Generate (X,Y): X and Y are German and English cepts4 respectively, each with one or more words. Words in X (German) may be consecutive or discontinuous, but the words in Y (English) must be consecutive. This operation causes the words in Y and the first word in X to be added to the English and German strings respectively, that were generated so far. Subsequent words in X are added to a queue to be generated later. All the English words in Y are generated immediately because English is generated in linear order. The generation of the second (and subsequent) German word in a multi-word cept can be delayed by gaps, jumps and the Generate Source Only operation defined below. Continue Source Cept: The German words added 2However, Crego and Yvon (2009), in their N-gram system, use split rules to handle target-side gaps and show a slight improvement on a Chinese-English translation task. 3Generating the English words in order is also what the decoder does when translating from German to English. 4A cept is a group of words in one language translated as a minimal unit in one specific context (Brown et al., 1993). 1047 to the queue by the Generate (X,Y) operation are generated by the Continue Source Cept operation. Each Continue Source Cept operation removes one German word from the queue and copies it to the German string. If X contains more than one German word, say n many, then it requires n translation operations, an initial Generate (X1...Xn, Y ) operation and n −1 Continue Source Cept operations. For example “hat...gelesen – read” is generated by the operation Generate (hat gelesen, read), which adds “hat” and “read” to the German and English strings and “gelesen” to a queue. A Continue Source Cept operation later removes “gelesen” from the queue and adds it to the German string. Generate Source Only (X): The string X is added at the current position in the German string. This operation is used to generate a German word X with no corresponding English word. It is performed immediately after its preceding German word is covered. This is because there is no evidence on the Englishside which indicates when to generate X. Generate Source Only (X) helps us learn a source word deletion model. It is used during decoding, where a German word (X) is either translated to some English word(s) by a Generate (X,Y) operation or deleted with a Generate Source Only (X) operation. Generate Identical: The same word is added at the current position in both the German and English strings. The Generate Identical operation is used during decoding for the translation of unknown words. 
The probability of this operation is estimated from singleton German words that are translated to an identical string. For example, for a tuple “Portland – Portland”, where German “Portland” was observed exactly once during training, we use a Generate Identical operation rather than Generate (Portland, Portland). We now discuss the set of reordering operations used by the generative story. Reordering has to be performed whenever the German word to be generated next does not immediately follow the previously generated German word. During the generation process, the translator maintains an index which specifies the position after the previously covered German word (j), an index (Z) which specifies the index after the right-most German word covered so far, and an index of the next German word to be covered (j′). The set of reordering operations used in Table 2: Step-wise Generation of Example 1(a). The arrow indicates position j. generation depends upon these indexes. Insert Gap: This operation inserts a gap which acts as a place-holder for the skipped words. There can be more than one open gap at a time. Jump Back (W): This operation lets the translator jump back to an open gap. It takes a parameter W specifying which gap to jump to. Jump Back (1) jumps to the closest gap to Z, Jump Back (2) jumps to the second closest gap to Z, etc. After the backward jump the target gap is closed. Jump Forward: This operation makes the translator jump to Z. It is performed if some already generated German word is between the previously generated word and the word to be generated next. A Jump Back (W) operation is only allowed at position Z. Therefore, if j ̸= Z, a Jump Forward operation has to be performed prior to a Jump Back operation. Table 2 shows step by step the generation of a German/English sentence pair, the corresponding translation operations, and the respective values of the index variables. A formal algorithm for converting a word-aligned bilingual corpus into an operation sequence is presented in Algorithm 1. 4 Model Our translation model p(F, E) is based on operation N-gram model which integrates translation and reordering operations. Given a source string F, a sequence of tuples T = (t1, . . . 
, tn) as hypothesized by the decoder to generate a target string E, the translation model estimates the probability of a 1048 Algorithm 1 Corpus Conversion Algorithm i Position of current English cept j Position of current German word j′ Position of next German word N Total number of English cepts fj German word at position j Ei English cept at position i Fi Sequence of German words linked to Ei Li Number of German words linked with Ei k Number of already generated German words for Ei aik Position of kth German translation of Ei Z Position after right-most generated German word S Position of the first word of a target gap i := 0; j := 0; k := 0 while fj is an unaligned word do Generate Source Only (fj) j := j + 1 Z := j while i < N do j′ := aik if j < j′ then if fj was not generated yet then Insert Gap if j = Z then j := j′ else Jump Forward if j′ < j then if j < Z and fj was not generated yet then Insert Gap W := relative position of target gap Jump Back (W) j := S if j < j′ then Insert Gap j := j′ if k = 0 then Generate (Fi, Ei) {or Generate Identical} else Continue Source Cept j := j + 1; k := k + 1 while fj is an unaligned word do Generate Source Only (fj) j := j + 1 if Z < j then Z := j if k = Li then i := i + 1; k := 0 Remarks: We use cept positions for English (not word positions) because English cepts are composed of consecutive words. German positions are word-based. The relative position of the target gap is 1 if it is closest to Z, 2 if it is the second closest gap etc. The operation Generate Identical is chosen if Fi = Ei and the overall frequency of the German cept Fi is 1. generated operation sequence O = (o1, . . . , oJ) as: p(F, E) ≈ J Y j=1 p(oj|oj−m+1...oj−1) where m indicates the amount of context used. Our translation model is implemented as an N-gram model of operations using SRILM-Toolkit (Stolcke, 2002) with Kneser-Ney smoothing. We use a 9-gram model (m = 8). Integrating the language model the search is defined as: ˆE = arg max E pLM(E)p(F, E) where pLM(E) is the monolingual language model and p(F, E) is the translation model. But our translation model is a joint probability model, because of which E is generated twice in the numerator. We add a factor, prior probability ppr(E), in the denominator, to negate this effect. It is used to marginalize the joint-probability model p(F, E). The search is then redefined as: ˆE = arg max E pLM(E)p(F, E) ppr(E) Both, the monolingual language and the prior probability model are implemented as standard word-based n-gram models: px(E) ≈ J Y j=1 p(wj|wj−m+1, . . . , wj−1) where m = 4 (5-gram model) for the standard monolingual model (x = LM) and m = 8 (same as the operation model5) for the prior probability model (x = pr). In order to improve end-to-end accuracy, we introduce new features for our model and shift from the generative6 model to the standard log-linear approach (Och and Ney, 2004) to tune7 them. We search for a target string E which maximizes a linear combination of feature functions: 5In decoding, the amount of context used for the prior probability is synchronized with the position of back-off in the operation model. 6Our generative model is about 3 BLEU points worse than the best discriminative results. 7We tune the operation, monolingual and prior probability models as separate features. We expect the prior probability model to get a negative weight but we do not force MERT to assign a negative weight to this feature. 
1049 ˆE = arg max E    J X j=1 λjhj(F, E)    where λj is the weight associated with the feature hj(F, E). Other than the 3 features discussed above (log probabilities of the operation model, monolingual language model and prior probability model), we train 8 additional features discussed below: Length Bonus The length bonus feature counts the length of the target sentence in words. Deletion Penalty Another feature for avoiding too short translations is the deletion penalty. Deleting a source word (Generate Source Only (X)) is a common operation in the generative story. Because there is no corresponding target-side word, the monolingual language model score tends to favor this operation. The deletion penalty counts the number of deleted source words. Gap Bonus and Open Gap Penalty These features are introduced to guide the reordering decisions. We observe a large amount of reordering in the automatically word aligned training text. However, given only the source sentence (and little world knowledge), it is not realistic to try to model the reasons for all of this reordering. Therefore we can use a more robust model that reorders less than humans. The gap bonus feature sums to the total number of gaps inserted to produce a target sentence. The open gap penalty feature is a penalty (paid once for each translation operation performed) whose value is the number of open gaps. This penalty controls how quickly gaps are closed. Distortion and Gap Distance Penalty We have two additional features to control the reordering decisions. One of them is similar8 to the distancebased reordering model used by phrasal MT. The other feature is the gap distance penalty which calculates the distance between the first word of a source cept X and the start of the left-most gap. This cost is paid once for each Generate, Generate Identical and Generate Source Only. For a source cept coverd by indexes X1, . . . , Xn, we get the feature value gj = X1 −S, where S is the index of the left-most source word where a gap starts. 8Let X1, . . . , Xn and Y1, . . . , Ym represent indexes of the source words covered by the tuples tj and tj−1 respectively. The distance between tj and tj−1 is given as dj = min(|Xk − Yl| −1) ∀Xk ∈{X1, . . . , Xn} and ∀Yl ∈{Y1, . . . , Ym} Lexical Features We also use source-to-target p(e|f) and target-to-source p(f|e) lexical translation probabilities. Our lexical features are standard (Koehn et al., 2003). The estimation is motivated by IBM Model-1. Given a tuple ti with source words f = f1, f2, . . . , fn, target words e = e1, e2, . . . , em and an alignment a between the source word positions x = 1, . . . , n and the target word positions y = 1, . . . , m, the lexical feature pw(f|e) is computed as follows: pw(f|e, a) = n Y x=1 1 |{y : (x, y) ∈a}| X ∀(x,y)∈a w(fx|ey) pw(e|f, a) is computed in the same way. 5 Decoding Our decoder for the new model performs a stackbased search with a beam-search algorithm similar to that used in Pharoah (Koehn, 2004a). Given an input sentence F, it first extracts a set of matching source-side cepts along with their n-best translations to form a tuple inventory. During hypothesis expansion, the decoder picks a tuple from the inventory and generates the sequence of operations required for the translation with this tuple in light of the previous hypothesis.9 The sequence of operations may include translation (generate, continue source cept etc.) and reordering (gap insertions, jumps) operations. 
The decoder also calculates the overall cost of the new hypothesis. Recombination is performed on hypotheses having the same coverage vector, monolingual language model context, and operation model context. We do histogrambased pruning, maintaining the 500 best hypotheses for each stack.10 9A hypothesis maintains the index of the last source word covered (j), the position of the right-most source word covered so far (Z), the number of open gaps, the number of gaps so far inserted, the previously generated operations, the generated target string, and the accumulated values of all the features discussed in Section 4. 10We need a higher beam size to produce translation units similar to the phrase-based systems. For example, the phrasebased system can learn the phrase pair “zum Beispiel – for example” and generate it in a single step placing it directly into the stack two words to the right. Our system generates this example with two separate tuple translations “zum – for” and “Beispiel – example” in two adjacent stacks. Because “zum – for” is not a frequent translation unit, it will be ranked quite low in the first stack until the tuple “Beispiel – example” appears in the second stack. Koehn and his colleagues have repeatedly shown that in1050 Figure 3: Post-editing of Alignments (a) Initial (b) No Target-Discontinuities (c) Final Alignments 6 Training Training includes: (i) post-editing of the alignments, (ii) generation of the operation sequence (iii) estimation of the n-gram language models. Our generative story does not handle target-side discontinuities and unaligned target words. Therefore we eliminate them from the training corpus in a 3-step process: If a source word is aligned with multiple target words which are not consecutive, first the link to the least frequent target word is identified, and the group of links containing this word is retained while the others are deleted. The intuition here is to keep the alignments containing content words (which are less frequent than functional words). The new alignment has no targetside discontinuities anymore, but might still contain unaligned target words. For each unaligned target word, we determine the (left or right) neighbour that it appears more frequently with and align it with the same source word as the neighbour. The result is an alignment without target-side discontinuities and unaligned target words. Figure 3 shows an illustrative example of the process. The tuples in Figure 3c are “A – U V”, “B – W X Y”, “C – NULL”, “D – Z”. We apply Algorithm 1 to convert the preprocessed aligned corpus into a sequence of translation operations. The resulting operation corpus contains one sequence of operations per sentence pair. In the final training step, the three language models are trained using the SRILM Toolkit. The operation model is estimated from the operation corpus. The prior probability model is estimated from the target side part of the bilingual corpus. The monolingual language model is estimated from the target side of the bilingual corpus and additional monolingual data. creasing the Moses stack size from 200 to 1000 does not have a significant effect on translation into English, see (Koehn and Haddow, 2009) and other shared task papers. 7 Experimental Setup 7.1 Data We evaluated the system on three data sets with German-to-English, Spanish-to-English and Frenchto-English news translations, respectively. 
We used data from the 4th version of the Europarl Corpus and the News Commentary which was made available for the translation task of the Fourth Workshop on Statistical Machine Translation.11 We use 200K bilingual sentences, composed by concatenating the entire news commentary (≈74K sentences) and Europarl (≈126K sentence), for the estimation of the translation model. Word alignments were generated with GIZA++ (Och and Ney, 2003), using the growdiag-final-and heuristic (Koehn et al., 2005). In order to obtain the best alignment quality, the alignment task is performed on the entire parallel data and not just on the training data we use. All data is lowercased, and we use the Moses tokenizer and recapitalizer. Our monolingual language model is trained on 500K sentences. These comprise 300K sentences from the monolingual corpus (news commentary) and 200K sentences from the target-side part of the bilingual corpus. The latter part is also used to train the prior probability model. The dev and test sets are news-dev2009a and news-dev2009b which contain 1025 and 1026 parallel sentences. The feature weights are tuned with Z-MERT (Zaidan, 2009). 7.2 Results Baseline: We compare our model to a recent version of Moses (Koehn et al., 2007) using Koehn’s training scripts and evaluate with BLEU (Papineni et al., 2002). We provide Moses with the same initial alignments as we are using to train our system.12 We use the default parameters for Moses, and a 5gram English language model (the same as in our system). We compare two variants of our system. The first system (Twno−rl) applies no hard reordering limit and uses the distortion and gap distance penalty features as soft constraints, allowing all possible reorderings. The second system (Twrl−6) uses no distortion and gap distance features, but applies a hard constraint which limits reordering to no more than 6 11http://www.statmt.org/wmt09/translation-task.html 12We tried applying our post-processing to the alignments provided to Moses and found that this made little difference. 1051 Source German Spanish French Blno−rl 17.41 19.85 19.39 Blrl−6 18.57 21.67 20.84 Twno−rl 18.97 22.17 20.94 Twrl−6 19.03 21.88 20.72 Table 3: This Work(Tw) vs Moses (Bl), no-rl = No Reordering Limit, rl-6 = Reordering limit 6 positions. Specifically, we do not extend hypotheses that are more than 6 words apart from the first word of the left-most gap during decoding. In this experiment, we disallowed tuples which were discontinuous on the source side. We compare our systems with two Moses systems as baseline, one using no reordering limit (Blno−rl) and one using the default distortion limit of 6 (Blrl−6). Both of our systems (see Table 3) outperform Moses on the German-to-English and Spanish-toEnglish tasks and get comparable results for Frenchto-English. Our best system (Twno−rl), which uses no hard reordering limit, gives statistically significant (p < 0.05)13 improvements over Moses (both baselines) for the German-to-English and Spanishto-English translation task. The results for Moses drop by more than a BLEU point without the reordering limit (see Blno−rl in Table 3). All our results are statistically significant over the baseline Blno−rl for all the language pairs. In another experiment, we tested our system also with tuples which were discontinuous on the source side. These gappy translation units neither improved the performance of the system with hard reordering limit (Twrl−6−asg) nor that of the system without reordering limit (Twno−rl−asg) as Table 4 shows. 
In an analysis of the output we found two reasons for this result: (i) Using tuples with source gaps increases the list of extracted n-best translation tuples exponentially which makes the search problem even more difficult. Table 5 shows the number of tuples (with and without gaps) extracted when decoding the test file with 10-best translations. (ii) The future cost14 is poorly estimated in case of tuples with gappy source cepts, causing search errors. In an experiment, we deleted gappy tuples with 13We used Kevin Gimpel’s implementation of pairwise bootstrap resampling (Koehn, 2004b), 1000 samples. 14The dynamic programming approach of calculating future cost for bigger spans gives erroneous results when gappy cepts can interleave. Details omitted due to space limitations. Source German Spanish French Twno−rl−asg 18.61 21.60 20.59 Twrl−6−asg 18.65 21.40 20.47 Twno−rl−hsg 18.91 21.93 20.87 Twrl−6−hsg 19.23 21.79 20.85 Table 4: Our Systems with Gappy Units, asg = All Gappy Units, hsg = Heuristic for pruning Gappy Units Source German Spanish French Gaps 965515 1705156 1473798 No-Gaps 256992 313690 343220 Heuristic (hsg) 281618 346993 385869 Table 5: 10-best Translation Options With & Without Gaps and using our Heuristic a score (future cost estimate) lower than the sum of the best scores of the parts. This heuristic removes many useless discontinuous tuples. We found that results improved (Twno−rl−hsg and Twrl−6−hsg in Table 4) compared to the version using all gaps (Twno−rl−asg, Twrl−6−asg), and are closer to the results without discontinuous tuples (Twno−rl and Twrl−6 in Table 3). 8 Sample Output In this section we compare the output of our systems and Moses. Example 1 in Figure 4 shows the powerful reordering mechanism of our model which moves the English verb phrase “do not want to negotiate” to its correct position between the subject “they” and the prepositional phrase “about concrete figures”. Moses failed to produce the correct word order in this example. Notice that although our model is using smaller translation units “nicht – do not”, “verhandlen – negotiate” and “wollen – want to”, it is able to memorize the phrase translation “nicht verhandlen wollen – do not want to negotiate” as a sequence of translation and reordering operations. It learns the reordering of “verhandlen – negotiate” and “wollen – want to” and also captures dependencies across phrase boundaries. Example 2 shows how our system without a reordering limit moves the English translation “vote” of the German clause-final verb “stimmen” across about 20 English tokens to its correct position behind the auxiliary “would”. Example 3 shows how the system with gappy tuples translates a German sentence with the particle verb “kehrten...zur¨uck” using a single tuple (dashed lines). Handling phenomena like particle verbs 1052 Figure 4: Sample Output Sentences strongly motivates our treatment of source side gaps. The system without gappy units happens to produce the same translation by translating “kehrten” to “returned” and deleting the particle “zur¨uck” (solid lines). This is surprising because the operation for translating “kehrten” to “returned” and for deleting the particle are too far apart to influence each other in an n-gram model. Moses run on the same example deletes the main verb (“kehrten”), an error that we frequently observed in the output of Moses. 
Our last example (Figure 5) shows that our model learns idioms like “meiner Meinung nach – In my opinion ,” and short phrases like “gibt es – there are” showing its ability to memorize these “phrasal” translations, just like Moses. 9 Conclusion We have presented a new model for statistical MT which can be used as an alternative to phrasebased translation. Similar to N-gram based MT, it addresses three drawbacks of traditional phrasal MT by better handling dependencies across phrase boundaries, using source-side gaps, and solving the phrasal segmentation problem. In contrast to Ngram based MT, our model has a generative story which tightly couples translation and reordering. Furthermore it considers all possible reorderings unlike N-gram systems that perform search only on Figure 5: Learning Idioms a limited number of pre-calculated orderings. Our model is able to correctly reorder words across large distances, and it memorizes frequent phrasal translations including their reordering as probable operations sequences. Our system outperformed Moses on standard Spanish-to-English and Germanto-English tasks and achieved comparable results for French-to-English. A binary version of the corpus conversion algorithm and the decoder is available.15 Acknowledgments The authors thank Fabienne Braune and the reviewers for their comments. Nadir Durrani was funded by the Higher Education Commission (HEC) of Pakistan. Alexander Fraser was funded by Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation. Helmut Schmid was supported by Deutsche Forschungsgemeinschaft grant SFB 732. 15http://www.ims.uni-stuttgart.de/∼durrani/resources.html 1053 References Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263–311. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Josep Maria Crego and Franois Yvon. 2009. Gappy translation units under left-to-right smt decoding. In Proceedings of the meeting of the European Association for Machine Translation (EAMT), pages 66–73, Barcelona, Spain. Josep Maria Crego and Franc¸ois Yvon. 2010. Improving reordering with linguistically informed bilingual n-grams. In Coling 2010: Posters, pages 197–205, Beijing, China, August. Coling 2010 Organizing Committee. Josep M. Crego, Marta R. Costa-juss, Jos B. Mario, and Jos A. R. Fonollosa. 2005a. Ngram-based versus phrasebased statistical machine translation. In In Proceedings of the International Workshop on Spoken Language Technology (IWSLT05, pages 177–184. Josep M. Crego, Jos´e B. Mariˆno, and Adri`a de Gispert. 2005b. Reordered search and unfolding tuples for ngram-based SMT. In Proceedings of the 10th Machine Translation Summit (MT Summit X), pages 283– 289, Phuket, Thailand. Michel Galley and Christopher D. Manning. 2010. Accurate non-hierarchical phrase-based translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 966– 974, Los Angeles, California, June. Association for Computational Linguistics. Philipp Koehn and Barry Haddow. 2009. Edinburgh’s submission to all tracks of the WMT 2009 shared task with reordering and speed improvements to Moses. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 160–164, Athens, Greece, March. 
Association for Computational Linguistics. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the Human Language Technology and North American Association for Computational Linguistics Conference, pages 127–133, Edmonton, Canada. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In International Workshop on Spoken Language Translation 2005. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Demonstration Program, Prague, Czech Republic. Philipp Koehn. 2004a. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In AMTA, pages 115–124. Philipp Koehn. 2004b. Statistical significance tests for machine translation evaluation. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 388–395, Barcelona, Spain, July. Association for Computational Linguistics. Zhifei Li, Chris Callison-burch, Chris Dyer, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren N. G. Thornton, Jonathan Weese, and Omar F. Zaidan. 2009. Joshua: An open source toolkit for parsing-based machine translation. J.B. Mari˜no, R.E. Banchs, J.M. Crego, A. de Gispert, P. Lambert, J.A.R. Fonollosa, and M.R. Costa-juss`a. 2006. N-gram-based machine translation. Computational Linguistics, 32(4):527–549. I. Dan Melamed. 2004. Statistical machine translation by parsing. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(1):417–449. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318, Morristown, NJ, USA. Association for Computational Linguistics. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Intl. Conf. Spoken Language Processing, Denver, Colorado. Omar F. Zaidan. 2009. Z-MERT: A fully configurable open source tool for minimum error rate training of machine translation systems. The Prague Bulletin of Mathematical Linguistics, 91:79–88. 1054
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1055–1065, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Integrating surprisal and uncertain-input models in online sentence comprehension: formal techniques and empirical results Roger Levy Department of Linguistics University of California at San Diego 9500 Gilman Drive # 0108 La Jolla, CA 92093-0108 [email protected] Abstract A system making optimal use of available information in incremental language comprehension might be expected to use linguistic knowledge together with current input to revise beliefs about previous input. Under some circumstances, such an error-correction capability might induce comprehenders to adopt grammatical analyses that are inconsistent with the true input. Here we present a formal model of how such input-unfaithful garden paths may be adopted and the difficulty incurred by their subsequent disconfirmation, combining a rational noisy-channel model of syntactic comprehension under uncertain input with the surprisal theory of incremental processing difficulty. We also present a behavioral experiment confirming the key empirical predictions of the theory. 1 Introduction In most formal theories of human sentence comprehension, input recognition and syntactic analysis are taken to be distinct processes, with the only feedback from syntax to recognition being prospective prediction of likely upcoming input (Jurafsky, 1996; Narayanan and Jurafsky, 1998, 2002; Hale, 2001, 2006; Levy, 2008a). Yet a system making optimal use of all available information might be expected to perform fully joint inference on sentence identity and structure given perceptual input, using linguistic knowledge both prospectively and retrospectively in drawing inferences as to how raw input should be segmented and recognized as a sequence of linguistic tokens, and about the degree to which each input token should be trusted during grammatical analysis. Formal models of such joint inference over uncertain input have been proposed (Levy, 2008b), and corroborative empirical evidence exists that strong coherence of current input with a perceptual neighbor of previous input may induce confusion in comprehenders as to the identity of that previous input (Connine et al., 1991; Levy et al., 2009). In this paper we explore a more dramatic prediction of such an uncertain-input theory: that, when faced with sufficiently biasing input, comprehenders might under some circumstances adopt a grammatical analysis inconsistent with the true raw input comprising a sentence they are presented with, but consistent with a slightly perturbed version of the input that has higher prior probability. If this is the case, then subsequent input strongly disconfirming this “hallucinated” garden-path analysis might be expected to induce the same effects as seen in classic cases of garden-path disambiguation traditionally studied in the psycholinguistic literature. We explore this prediction by extending the rational uncertain-input model of Levy (2008b), integrating it with SURPRISAL THEORY (Hale, 2001; Levy, 2008a), which successfully accounts for and quantifies traditional garden-path disambiguation effects; and by testing predictions of the extended model in a self-paced reading study. Section 2 reviews surprisal theory and how it accounts for traditional gardenpath effects. 
Section 3 provides background information on garden-path effects relevant to the current study, describes how we might hope to reveal comprehenders’ use of grammatical knowledge to revise beliefs about the identity of previous linguistic sur1055 face input and adopt grammatical analyses inconsistent with true input through a controlled experiment, and informally outlines how such belief revisions might arise as a side effect in a general theory of rational comprehension under uncertain input. Section 4 defines and estimates parameters for a model instantiating the general theory, and describes the predictions of the model for the experiment described in Section 3 (along with the inference procedures required to determine those predictions). Section 5 reports the results of the experiment. Section 6 concludes. 2 Garden-path disambiguation under surprisal The SURPRISAL THEORY of incremental sentenceprocessing difficulty (Hale, 2001; Levy, 2008a) posits that the cognitive effort required to process a given word wi of a sentence in its context is given by the simple information-theoretic measure of the log of the inverse of the word’s conditional probability (also called its “surprisal” or “Shannon information content”) in its intra-sentential context w1,...,i−1 and extra-sentential context Ctxt: Effort(wi) ∝log 1 P(wi|w1...i−1, Ctxt) (In the rest of this paper, we consider isolatedsentence comprehension and ignore Ctxt.) The theory derives empirical support not only from controlled experiments manipulating grammatical context but also from broad-coverage studies of reading times for naturalistic text (Demberg and Keller, 2008; Boston et al., 2008; Frank, 2009; Roark et al., 2009), including demonstration that the shape of the relationship between word probability and reading time is indeed log-linear (Smith and Levy, 2008). Surprisal has had considerable success in accounting for one of the best-known phenomena in psycholinguistics, the GARDEN-PATH SENTENCE (Frazier, 1979), in which a local ambiguity biases the comprehender’s incremental syntactic interpretation so strongly that upon encountering disambiguating input the correct interpretation can only be recovered with great effort, if at all. The most famous example is (1) below (Bever, 1970): (1) The horse raced past the barn fell. where the context before the final word is strongly biased toward an interpretation where raced is the main verb of the sentence (MV; Figure 1a), the intended interpretation, where raced begins a reduced relative clause (RR; Figure 1b) and fell is the main verb, is extremely difficult to recover. Letting Tj range over the possible incremental syntactic analyses of words w1...6 preceding fell, under surprisal the conditional probability of the disambiguating continuation fell can be approximated as P(fell|w1...6) = X j P(fell|Tj, w1...6)P(Tj|w1...6) (I) For all possible predisambiguation analyses Tj, either the analysis is disfavored by the context (P(Tj|w1...6) is low) or the analysis makes the disambiguating word unlikely (P(fell|Tj, w1...6) is low). Since every summand in the marginalization of Equation (I) has a very small term in it, the total marginal probability is thus small and the surprisal is high. Hale (2001) demonstrated that surprisal thus predicts strong garden-pathing effects in the classic sentence The horse raced past the barn fell on basis of the overall rarity of reduced relative clauses alone. 
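To make Equation (I) concrete, the toy computation below marginalizes over two hypothetical pre-disambiguation analyses; the probabilities are invented purely for illustration and are not corpus estimates.

    import math

    def surprisal_in_bits(candidates):
        # candidates: list of (P(T_j | context), P(w_i | T_j, context)) pairs.
        # Equation (I): marginalize over the analyses, then take -log2.
        marginal = sum(p_tree * p_word for p_tree, p_word in candidates)
        return -math.log2(marginal)

    # Invented numbers: the main-verb analysis dominates the context but makes
    # "fell" very unlikely, while the dispreferred reduced-relative analysis
    # licenses it; every summand is small, so the surprisal is high.
    print(round(surprisal_in_bits([(0.99, 0.0005), (0.01, 0.30)]), 1))  # ~8.2 bits
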
More generally, Jurafsky (1996) used a combination of syntactic probabilities (reduced RCs are rare) and argument-structure probabilities (raced is usually intransitive) to estimate the probability ratio of the two analyses of pre-disambiguation context in Figure 1 as roughly 82:1, putting a lower bound on the additional surprisal incurred at fell for the reduced-RC variant over the unreduced variant (The horse that was raced past the barn fell) of 6.4 bits.1 3 Garden-pathing and input uncertainty We now move on to cases where garden-pathing can apparently be blocked by only small changes to the surface input, which we will take as a starting point for developing an integrated theory of uncertaininput inference and surprisal. The backdrop is what is known in the psycholinguistic literature as the NP/Z ambiguity, exemplified in (2) below: 1We say that this is a “lower bound” because incorporating even finer-grained information—such as the fact that horse is a canonical subject for intransitive raced—into the estimate would almost certainly push the probability ratio even farther in favor of the main-clause analysis. 1056 S NP DT The NN horse VP VBD raced PP IN past NP DT the NN barn ... (a) MV interpretation S NP DT The NN horse RRC S VP VBN raced PP IN past NP DT the NN barn VP ... (b) RR interpretation Figure 1: Classic garden pathing (2) While Mary was mending the socks fell off her lap. In incremental comprehension, the phrase the socks is ambiguous between being the NP object of the preceding subordinate-clause verb mending versus being the subject of the main clause (in which case mending has a Zero object); in sentences like (2) the initial bias is toward the NP interpretation. The main-clause verb fell disambiguates, ruling out the initially favored NP analysis. It has been known since Frazier and Rayner (1982) that this effect of garden-path disambiguation can be measured in reading times on the main-clause verb (see also Mitchell, 1987; Ferreira and Henderson, 1993; Adams et al., 1998; Sturt et al., 1999; Hill and Murray, 2000; Christianson et al., 2001; van Gompel and Pickering, 2001; Tabor and Hutchins, 2004; Staub, 2007). Small changes to the context can have huge effects on comprehenders’ initial interpretations, however. It is unusual for sentenceinitial subordinate clauses not to end with a comma or some other type of punctuation (searches in the parsed Brown corpus put the rate at about 18%); empirically it has consistently been found that a comma eliminates the garden-path effect in NP/Z sentences: (3) While Mary was mending, the socks fell off her lap. Understanding sentences like (3) is intuitively much easier, and reading times at the disambiguating verb are reliably lower when compared with (2). Fodor (2002) summarized the power of this effect succinctly: [w]ith a comma after mending, there would be no syntactic garden path left to be studied. (Fodor, 2002) In a surprisal model with clean, veridical input, Fodor’s conclusion is exactly what is predicted: separating a verb from its direct object with a comma effectively never happens in edited, published written English, so the conditional probability of the NP analysis should be close to zero.2 When uncertainty about surface input is introduced, however— due to visual noise, imperfect memory representations, and/or beliefs about possible speaker error— analyses come into play in which some parts of the true string are treated as if they were absent. 
In particular, because the two sentences are perceptual neighbors, the pre-disambiguation garden-path analysis of (2) may be entertained in (3). We can get a tighter handle on the effect of input uncertainty by extending Levy (2008b)’s analysis of the expected beliefs of a comprehender about the sequence of words constituting an input sentence to joint inference over both sentence identity and sentence structure. For a true sentence w∗which yields perceptual input I, joint inference on sentence identity w and structure T marginalizing over I yields: PC(T, w|w∗) = Z I PC(T, w|I, w∗)PT (I|w∗) dI where PT (I|w∗) is the true model of noise (perceptual inputs derived from the true sentence) and PC(·) terms reflect the comprehender’s linguistic knowledge and beliefs about the noise processes intervening between intended sentences and perceptual input. w∗and w must be conditionally independent given I since w∗is not observed by the comprehender, giving us (through Bayes’ Rule): P(T, w|w∗) = Z I PC(I|T, w)PC(T, w) PC(I) PT (I|w∗) dI For present purposes we constrain the comprehender’s model of noise so that T and I are conditionally independent given w, an assumption that can be relaxed in future work.3 This allows us the further 2A handful of VP -> V , NP ... rules can be found in the Penn Treebank, but they all involve appositives (It [VP ran, this apocalyptic beast ...]), vocatives (You should [VP understand, Jack, ...]), cognate objects (She [VP smiled, a smile without humor]), or indirect speech (I [VP thought, you nasty brute...]); none involve true direct objects of the type in (3). 3This assumption is effectively saying that noise processes are syntax-insensitive, which is clearly sensible for environmental noise but would need to be relaxed for some types of speaker error. 1057 simplification to P(T, w|w∗) = (i) z }| { PC(T, w) (ii) z }| { Z I PC(I|w)PT (I|w∗) PC(I) dI (II) That is, a comprehender’s average inferences about sentence identity and structure involve a tradeoff between (i) the prior probability of a grammatical derivation given a speaker’s linguistic knowledge and (ii) the fidelity of the derivation’s yield to the true sentence, as measured by a combination of true noise processes and the comprehender’s beliefs about those processes. 3.1 Inducing hallucinated garden paths through manipulating prior grammatical probabilities Returning to our discussion of the NP/Z ambiguity, the relative ease of comprehending (3) entails an interpretation in the uncertain-input model that the cost of infidelity to surface input is sufficient to prevent comprehenders from deriving strong belief in a hallucinated garden-path analysis of (3) predisambiguation in which the comma is ignored. At the same time, the uncertain-input theory predicts that if we manipulate the balance of prior grammatical probabilities PC(T, w) strongly enough (term (i) in Equation (II)), it may shift the comprehender’s beliefs toward a garden-path interpretation. This observation sets the stage for our experimental manipulation, illustrated below: (4) As the soldiers marched, toward the tank lurched an injured enemy combatant. Example (4) is qualitatively similar to (3), but with two crucial differences. First, there has been LOCATIVE INVERSION (Bolinger, 1971; Bresnan, 1994) in the main clause: a locative PP has been fronted before the verb, and the subject NP is realized postverbally. 
Locative inversion is a low-frequency construction, hence it is crucially disfavored by the comprehender’s prior over possible grammatical structures. Second, the subordinate-clause verb is no longer transitive, as in (3); instead it is intransitive but could itself take the main-clause fronted PP as a dependent. Taken together, these properties should shift comprehenders’ posterior inferences given prior grammatical knowledge and predisambiguation input more sharply than in (3) toward the input-unfaithful interpretation in which the immediately preverbal main-clause constituent (toward the tank in (4)) is interpreted as a dependent of the subordinate-clause verb, as if the comma were absent. If comprehenders do indeed seriously entertain such interpretations, then we should be able to find the empirical hallmarks (e.g., elevated reading times) of garden-path disambiguation at the mainclause verb lurched, which is incompatible with the “hallucinated” garden-path interpretation. Empirically, however, it is important to disentangle these empirical hallmarks of garden-path disambiguation from more general disruption that may be induced by encountering locative inversion itself. We address this issue by introducing a control condition in which a postverbal PP is placed within the subordinate clause: (5) As the soldiers marched into the bunker, toward the tank lurched an injured enemy combatant. [+PP] Crucially, this PP fills a similar thematic role for the subordinate-clause verb marched as the main-clause fronted PP would, reducing the extent to which the comprehender’s prior favors the input-unfaithful interpretation (that is, the prior ratio P(marched into the bunker toward the tank|VP) P(marched into the bunker|VP) for (5) is much lower than the corresponding prior ratio P(marched toward the tank|VP) P(marched|VP) for (4)), while leaving locative inversion present. Finally, to ensure that sentence length itself does not create a confound driving any observed processing-time difference, we cross presence/absence of the subordinate-clause PP with inversion in the main clause: (6) a. As the soldiers marched, the tank lurched toward an injured enemy combatant. [Uninverted,−PP] b. As the soldiers marched into the bunker, the tank lurched toward an injured enemy combatant. [Uninverted,+PP] 4 Model instantiation and predictions To determine the predictions of our uncertaininput/surprisal model for the above sentence types, we extracted a small grammar from the parsed 1058 TOP →S . 1.000000 S →INVERTED NP 0.003257 S →SBAR S 0.012289 S →SBAR , S 0.041753 S →NP VP 0.942701 INVERTED →PP VBD 1.000000 SBAR →INSBAR S 1.000000 VP →VBD RB 0.002149 VP →VBD PP 0.202024 VP →VBD NP 0.393660 VP →VBD PP PP 0.028029 VP →VBD RP 0.005731 VP →VBD 0.222441 VP →VBD JJ 0.145966 PP →IN NP 1.000000 NP →DT NN 0.274566 NP →NNS 0.047505 NP →NNP 0.101198 NP →DT NNS 0.045082 NP →PRP 0.412192 NP →NN 0.119456 Table 1: A small PCFG (lexical rewrite rules omitted) covering the constructions used in (4)–(6), with probabilities estimated from the parsed Brown corpus. Brown corpus (Kuˇcera and Francis, 1967; Marcus et al., 1994), covering sentence-initial subordinate clause and locative-inversion constructions.4,5 The non-terminal rewrite rules are shown in Table 1, along with their probabilities; of terminal rewrite rules for all words which either appear in the sentences to be parsed or appeared at least five times in the corpus, with probabilities estimated by relative frequency. 
As we describe in the following two sections, un4Rule counts were obtained using tgrep2/Tregex patterns (Rohde, 2005; Levy and Andrew, 2006); the probabilities given are relative frequency estimates. The patterns used can be found at http://idiom.ucsd.edu/˜rlevy/papers/ acl2011/tregex_patterns.txt. 5Similar to the case noted in Footnote 2, a small number of VP -> V , PP ... rules can be found in the parsed Brown corpus. However, the PPs involved are overwhelmingly (i) set expressions, such as for example, in essence, and of course, or (ii) manner or temporal adjuncts. The handful of true locative PPs (5 in total) are all parentheticals intervening between the verb and a complement strongly selected by the verb (e.g., [VP means, in my country, homosexual]); none fulfill one of the verb’s thematic requirements. certain input is represented as a weighted finite-state automaton (WFSA), allowing us to represent the incremental inferences of the comprehender through intersection of the input WFSA with the PCFG above (Bar-Hillel et al., 1964; Nederhof and Satta, 2003, 2008). 4.1 Uncertain-input representations Levy (2008a) introduced the LEVENSHTEINDISTANCE KERNEL as a model of the average effect of noise in uncertain-input probabilistic sentence comprehension; this corresponds to term (ii) in our Equation (II). This kernel had a single noise parameter governing scaling of the cost of considering word substitutions, insertions, and deletions are considered, with the cost of a word substitution falling off exponentially with Levenshtein distance between the true word and the substituted word, and the cost of word insertion or deletion falling off exponentially with word length. The distribution over the infinite set of strings w can be encoded in a weighted finite-state automaton, facilitating efficient inference. We use the Levenshtein-distance kernel here to capture the effects of perceptual noise, but make two modifications necessary for incremental inference and for the correct computation of surprisal values for new input: the distribution over already-seen input must be proper, and possible future inputs must be costless. The resulting weighted finite-state representation of noisy input for a true sentence prefix w∗= w1...j is a j + 1-state automaton with arcs as follows: • For each i ∈1, . . . , j: – A substitution arc from i−1 to i with cost proportional to exp[−LD(w′, wi) γ] for each word w′ in the lexicon, where γ > 0 is a noise parameter and LD(w′, wi) is the Levenshtein distance between w′ and wi (when w′ = wi there is no change to the word); – A deletion arc from i−1 to i labeled ǫ with cost proportional to exp[−len(wi)/γ]; – An insertion loop arc from i −1 to i −1 with cost proportional to exp[−len(w′)/γ] for every word w′ in the lexicon; • A loop arc from j to j for each word w′ in 1059 ǫ/0.063 it/0.467 hit/0.172 him/0.063 it/0.135 hit/0.050 him/0.050 it/0.135 hit/0.050 him/0.050 ǫ/0.021 it/0.158 hit/0.428 him/0.158 it/1.000 hit/1.000 him/1.000 1 0 2 Figure 2: Noisy WFSA for partial input it hit. . . with lexicon {it,hit,him}, noise parameter γ=1 the lexicon, with zero cost (value 1 in the real semiring); • State j is a zero-cost final state; no other states are final. The addition of loop arcs at state n allows modeling of incremental comprehension through the automaton/grammar intersection (see also Hale, 2006); and the fact that these arcs are costless ensures that the partition function of the intersection reflects only the grammatical prior plus the costs of input already seen. 
In order to ensure that the distribution over already-seen input is proper, we normalize the costs on outgoing arcs from all states but j.6 Figure 2 gives an example of a simple WFSA representation for a short partial input with a small lexicon. 4.2 Inference Computing the surprisal incurred by the disambiguating element given an uncertain-input representation of the sentence involves a standard application of the definition of conditional probability (Hale, 2001): log 1 P(I1...i|I1...i−1) = log P(I1...i−1) P(I1...i) (III) Since our uncertain inputs I1...k are encoded by a WFSA, the probability P(I1...k) is equal to the partition function of the intersection of this WFSA with the PCFG given in Table 1.7 PCFGs are a special class of weighted context-free grammars (WCFGs), 6If a state’s total unnormalized cost of insertion arcs is α and that of deletion and insertion arcs is β, its normalizing constant is β 1−α. Note that we must have α < 1, placing a constraint on the value that γ can take (above which the normalizing constant diverges). 7Using the WFSA representation of average noise effects here actually involves one simplifying assumption, that the avwhich are closed under intersection with WFSAs; a constructive procedure exists for finding the intersection (Bar-Hillel et al., 1964; Nederhof and Satta, 2003). Hence we are left with finding the partition function of a WCFG, which cannot be computed exactly, but a number of approximation methods are known (Stolcke, 1995; Smith and Johnson, 2007; Nederhof and Satta, 2008). In practice, the computation required to compute the partition function under any of these methods increases with the size of the WCFG resulting from the intersection, which for a binarized PCFG with R rules and an n-state WFSA is Rn2. To increase efficiency we implemented what is to our knowledge a novel method for finding the minimal grammar including all rules that will have non-zero probability in the intersection. We first parse the WFSA bottom-up with the item-based method of Goodman (1999) in the Boolean semiring, storing partial results in a chart. After completion of this bottom-up parse, every rule that will have non-zero probability in the intersection PCFG will be identifiable with a set of entries in the chart, but not all entries in this chart will have non-zero probability, since some are not connected to the root. Hence we perform a second, topdown Boolean-semiring parsing pass on the bottomup chart, throwing out entries that cannot be derived from the root. We can then include in the intersection grammar only those rules from the classic construction that can be identified with a set of surviving entries in the final parse chart.8 The partition functions for each category in this intersection grammar can then be computed; we used a fixed-point method preceded by a topological sort on the grammar’s ruleset, as described by Nederhof and Satta (2008). To obtain the surprisal of the input deriving from a word wi in its context, we can thus comerage surprisal of Ii, or EPT h log 1 PC(Ii|I1...i−1) i , is well approximated by the log of the ratio of the expected probabilities of the noisy inputs I1...i−1 and I1...i, since as discussed in Section 3 the quantities P(I1...i−1) and P(I1...i) are expectations under the true noise distribution. This simplifying assumption has the advantage of bypassing commitment to a specific representation of perceptual input and should be justifiable for reasonable noise functions, but the issue is worth further scrutiny. 
8Note that a standard top-down algorithm such as Earley parsing cannot be used to avoid the need for both bottom-up and top-down passes, since the presence of loops in the WFSA breaks the ability to operate strictly left-to-right. 1060 0.10 0.15 0.20 0.25 8.5 9.0 9.5 10.0 10.5 11.0 Noise level γ (high=noisy) Surprisal at main−clause verb Inverted, +PP Uninverted, +PP Inverted, −PP Uninverted, −PP Figure 3: Model predictions for (4)–(6) pute the partition functions for noisy inputs I1...i−1 and I1...i corresponding to words w1...i−1 and words w1...i respectively, and take the log of their ratio as in Equation (III). 4.3 Predictions The noise level γ is a free parameter in this model, so we plot model predictions—the expected surprisal of input from the main-clause verb for each variant of the target sentence in (4)–(6)—over a wide range of its possible values (Figure 3). The far left of the graph asymptotes toward the predictions of clean surprisal, or noise-free input. With little to no input uncertainty, the presence of the comma rules out the garden-path analysis of the fronted PP toward the tank, and the surprisal at the main-clause verb is the same across condition (here reflecting only the uncertainty of verb identity for this small grammar). As input uncertainty increases, however, surprisal in the [Inverted, −PP] condition increases, reflecting the stronger belief given preceding context in an input-unfaithful interpretation. 5 Empirical results To test these predictions we conducted a word-byword self-paced reading study, in which participants read by pressing a button to reveal each successive word in a sentence; times between button presses are recorded and analyzed as an index of incremental processing difficulty (Mitchell, 1984). Forty monolingual native-English speaker participants read twenty-four sentence quadruplets (“items”) on the pattern of (4)–(6), with a Latinsquare design so that each participant saw an equal Inverted Uninverted -PP 0.76 0.93 +PP 0.85 0.92 Table 2: Question-answering accuracy number of sentences in each condition and saw each item only once. Experimental items were pseudorandomly interspersed with 62 filler sentences; no two experimental items were ever adjacent. Punctuation was presented with the word to its left, so that for (4) the four and fifth button presses would yield --------------- marched, --------------and ------------------------ toward -------respectively (right-truncated here for reasons of space). Every sentence was followed by a yes/no comprehension question (e.g., Did the tank lurch toward an injured enemy combatant?); participants received feedback whenever they answered a question incorrectly. Reading-time results are shown in Figure 4. As can be seen, the model’s predictions are matched at the main-clause verb: reading times are highest in the [Inverted, −PP] condition, and there is an interaction between main-clause inversion and presence of a subordinate-clause PP such that presence of the latter reduces reading times more for inverted than for uninverted main clauses. This interaction is significant in both by-participants and by-items ANOVAs (both p < 0.05) and in a linear mixedeffects analysis with participants- and item-specific random interactions (t > 2; see Baayen et al., 2008). 
The same pattern persists and remains significant through to the end of the sentence, indicating considerable processing disruption, and is also observed in question-answering accuracies for experimental sentences, which are superadditively lowest in the [Inverted, −PP] condition (Table 2). The inflated reading times for the [Inverted, −PP] condition beginning at the main-clause verb confirm the predictions of the uncertaininput/surprisal theory. Crucially, the input that would on our theory induce the comprehender to question the comma (the fronted main-clause PP) 1061 400 500 600 700 Reading time (ms) As the soldiers marched(,) into the bunker, toward the tank lurched toward an enemy combatant. Inverted, +PP Uninverted, +PP Inverted, −PP Uninverted, −PP Figure 4: Average reading times for each part of the sentence, broken down by experimental condition is not seen until after the comma is no longer visible (and presumably has been integrated into beliefs about syntactic analysis on veridical-input theories). This empirical result is hence difficult to accommodate in accounts which do not share our theory’s crucial property that comprehenders can revise their belief in previous input on the basis of current input. 6 Conclusion Language is redundant: the content of one part of a sentence carries predictive value both for what will precede and what will follow it. For this reason, and because the path from a speaker’s intended utterance to a comprehender’s perceived input is noisy and error-prone, a comprehension system making optimal use of available information would use current input not only for forward prediction but also to assess the veracity of previously encountered input. Here we have developed a theory of how such an adaptive error-correcting capacity is a consequence of noisy-channel inference, with a comprehender’s beliefs regarding sentence form and structure at any moment in incremental comprehension reflecting a balance between fidelity to perceptual input and a preference for structures with higher prior probability. As a consequence of this theory, certain types of sentence contexts will cause the drive toward higher prior-probability analyses to overcome the drive to maintain fidelity to input, undermining the comprehender’s belief in an earlier part of the input actually perceived in favor of an analysis unfaithful to part of the true input. If subsequent input strongly disconfirms this incorrect interpretation, we should see behavioral signatures of classic garden-path disambiguation. Within the theory, the size of this “hallucinated” garden-path effect is indexed by the surprisal value under uncertain input, marginalizing over the actual sentence observed. Based on a model implementing theory we designed a controlled psycholinguistic experiment making specific predictions regarding the role of fine-grained grammatical context in modulating comprehenders’ strength of belief in a highly specific bit of linguistic input—a comma marking the end of a sentence-initial subordinate clause— and tested those predictions in a self-paced reading experiment. As predicted by the theory, reading times at the word disambiguating the “hallucinated” garden-path were inflated relative to control conditions. These results contribute to the theory of uncertain-input effects in online sentence processing by suggesting that comprehenders may be induced not only to entertain but to adopt relatively strong beliefs in grammatical analyses that require modification of the surface input itself. 
Our results also bring a new degree of nuance to surprisal theory, demonstrating that perceptual neighbors of true preceding input may need to be taken into account in order to estimate how surprising a comprehender will find subsequent input to be. Beyond the domain of psycholinguistics, the methods employed here might also be usefully applied to practical problems such as parsing of degraded or fragmentary sentence input, allowing joint constraint derived from grammar and available input to fill in gaps (Lang, 1988). Of course, practical applications of this sort would raise challenges of their own, such as extending the grammar to broader coverage, which is delicate here since the surface input places a weaker check on overgeneration from the grammar than in traditional probabilistic parsing. Larger grammars also impose a technical burden since parsing uncertain input is in practice more computationally intensive than parsing clean input, raising the question of what approximate-inference algorithms might be well-suited to processing uncertain input with grammatical knowledge. Answers to this question might in turn be of interest for sentence processing, since the exhaustive-parsing idealization employed here is not psychologically plausible. It seems likely that human comprehension in1062 volves approximate inference with severely limited memory that is nonetheless highly optimized to recover something close to the intended meaning of an utterance, even when the recovered meaning is not completely faithful to the input itself. Arriving at models that closely approximate this capacity would be of both theoretical and practical value. Acknowledgments Parts of this work have benefited from presentation at the 2009 Annual Meeting of the Linguistic Society of America and the 2009 CUNY Sentence Processing Conference. I am grateful to Natalie Katz and Henry Lu for assistance in preparing materials and collecting data for the self-paced reading experiment described here. This work was supported by a UCSD Academic Senate grant, NSF CAREER grant 0953870, and NIH grant 1R01HD065829-01. References Adams, B. C., Clifton, Jr., C., and Mitchell, D. C. (1998). Lexical guidance in sentence processing? Psychonomic Bulletin & Review, 5(2):265–270. Baayen, R. H., Davidson, D. J., and Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4):390–412. Bar-Hillel, Y., Perles, M., and Shamir, E. (1964). On formal properties of simple phrase structure grammars. In Language and Information: Selected Essays on their Theory and Application. Addison-Wesley. Bever, T. (1970). The cognitive basis for linguistic structures. In Hayes, J., editor, Cognition and the Development of Language, pages 279–362. John Wiley & Sons. Bolinger, D. (1971). A further note on the nominal in the progressive. Linguistic Inquiry, 2(4):584– 586. Boston, M. F., Hale, J. T., Kliegl, R., Patil, U., and Vasishth, S. (2008). Parsing costs as predictors of reading difficulty: An evaluation using the Potsdam sentence corpus. Journal of Eye Movement Research, 2(1):1–12. Bresnan, J. (1994). Locative inversion and the architecture of universal grammar. Language, 70(1):72–131. Christianson, K., Hollingworth, A., Halliwell, J. F., and Ferreira, F. (2001). Thematic roles assigned along the garden path linger. Cognitive Psychology, 42:368–407. Connine, C. M., Blasko, D. G., and Hall, M. (1991). 
Effects of subsequent sentence context in auditory word recognition: Temporal and linguistic constraints. Journal of Memory and Language, 30(2):234–250. Demberg, V. and Keller, F. (2008). Data from eye-tracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193–210. Ferreira, F. and Henderson, J. M. (1993). Reading processes during syntactic analysis and reanalysis. Canadian Journal of Experimental Psychology, 16:555–568. Fodor, J. D. (2002). Psycholinguistics cannot escape prosody. In Proceedings of the Speech Prosody Conference. Frank, S. L. (2009). Surprisal-based comparison between a symbolic and a connectionist model of sentence processing. In Proceedings of the 31st Annual Conference of the Cognitive Science Society, pages 1139–1144. Frazier, L. (1979). On Comprehending Sentences: Syntactic Parsing Strategies. PhD thesis, University of Massachusetts. Frazier, L. and Rayner, K. (1982). Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14:178–210. Goodman, J. (1999). Semiring parsing. Computational Linguistics, 25(4):573–605. Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics, pages 159–166. Hale, J. (2006). Uncertainty about the rest of the sentence. Cognitive Science, 30(4):609–642. 1063 Hill, R. L. and Murray, W. S. (2000). Commas and spaces: Effects of punctuation on eye movements and sentence parsing. In Kennedy, A., Radach, R., Heller, D., and Pynte, J., editors, Reading as a Perceptual Process. Elsevier. Jurafsky, D. (1996). A probabilistic model of lexical and syntactic access and disambiguation. Cognitive Science, 20(2):137–194. Kuˇcera, H. and Francis, W. N. (1967). Computational Analysis of Present-day American English. Providence, RI: Brown University Press. Lang, B. (1988). Parsing incomplete sentences. In Proceedings of COLING. Levy, R. (2008a). Expectation-based syntactic comprehension. Cognition, 106:1126–1177. Levy, R. (2008b). A noisy-channel model of rational human sentence comprehension under uncertain input. In Proceedings of the 13th Conference on Empirical Methods in Natural Language Processing, pages 234–243. Levy, R. and Andrew, G. (2006). Tregex and Tsurgeon: tools for querying and manipulating tree data structures. In Proceedings of the 2006 conference on Language Resources and Evaluation. Levy, R., Bicknell, K., Slattery, T., and Rayner, K. (2009). Eye movement evidence that readers maintain and act on uncertainty about past linguistic input. Proceedings of the National Academy of Sciences, 106(50):21086–21090. Marcus, M. P., Santorini, B., and Marcinkiewicz, M. A. (1994). Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Mitchell, D. C. (1984). An evaluation of subjectpaced reading tasks and other methods for investigating immediate processes in reading. In Kieras, D. and Just, M. A., editors, New methods in reading comprehension. Hillsdale, NJ: Earlbaum. Mitchell, D. C. (1987). Lexical guidance in human parsing: Locus and processing characteristics. In Coltheart, M., editor, Attention and Performance XII: The psychology of reading. London: Erlbaum. Narayanan, S. and Jurafsky, D. (1998). Bayesian models of human sentence processing. In Proceedings of the Twelfth Annual Meeting of the Cognitive Science Society. Narayanan, S. 
and Jurafsky, D. (2002). A Bayesian model predicts human parse preference and reading time in sentence processing. In Advances in Neural Information Processing Systems, volume 14, pages 59–65. Nederhof, M.-J. and Satta, G. (2003). Probabilistic parsing as intersection. In Proceedings of the International Workshop on Parsing Technologies. Nederhof, M.-J. and Satta, G. (2008). Computing partition functions of PCFGs. Research on Logic and Computation, 6:139–162. Roark, B., Bachrach, A., Cardenas, C., and Pallier, C. (2009). Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing. In Proceedings of EMNLP. Rohde, D. (2005). TGrep2 User Manual, version 1.15 edition. Smith, N. A. and Johnson, M. (2007). Weighted and probabilistic context-free grammars are equally expressive. Computational Linguistics, 33(4):477–491. Smith, N. J. and Levy, R. (2008). Optimal processing times in reading: a formal model and empirical investigation. In Proceedings of the 30th Annual Meeting of the Cognitive Science Society. Staub, A. (2007). The parser doesn’t ignore intransitivity, after all. Journal of Experimental Psychology: Learning, Memory, & Cognition, 33(3):550– 569. Stolcke, A. (1995). An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165–201. Sturt, P., Pickering, M. J., and Crocker, M. W. (1999). Structural change and reanalysis difficulty in language comprehension. Journal of Memory and Language, 40:136–150. Tabor, W. and Hutchins, S. (2004). Evidence for self-organized sentence processing: Digging in effects. Journal of Experimental Psychology: Learning, Memory, & Cognition, 30(2):431–450. 1064 van Gompel, R. P. G. and Pickering, M. J. (2001). Lexical guidance in sentence processing: A note on Adams, Clifton, and Mitchell (1998). Psychonomic Bulletin & Review, 8(4):851–857. 1065
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1066–1076, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Metagrammar Engineering: Towards systematic exploration of implemented grammars Antske Fokkens Department of Computational Linguistics, Saarland University & German Research Center for Artificial Intelligence (DFKI) Project Office Berlin Alt-Moabit 91c, 10559 Berlin, Germany [email protected] Abstract When designing grammars of natural language, typically, more than one formal analysis can account for a given phenomenon. Moreover, because analyses interact, the choices made by the engineer influence the possibilities available in further grammar development. The order in which phenomena are treated may therefore have a major impact on the resulting grammar. This paper proposes to tackle this problem by using metagrammar development as a methodology for grammar engineering. I argue that metagrammar engineering as an approach facilitates the systematic exploration of grammars through comparison of competing analyses. The idea is illustrated through a comparative study of auxiliary structures in HPSG-based grammars for German and Dutch. Auxiliaries form a central phenomenon of German and Dutch and are likely to influence many components of the grammar. This study shows that a special auxiliary+verb construction significantly improves efficiency compared to the standard argument-composition analysis for both parsing and generation. 1 Introduction One of the challenges in designing grammars of natural language is that, typically, more than one formal analysis can account for a given phenomenon. The criteria for choosing between competing analyses are fairly clear (observational adequacy, analytical clarity, efficiency), but given that analyses of different phenomena interact, actually evaluating analyses on those criteria in a systematic manner is far from straightforward. The standard methodology involves either picking one analysis, and seeing how it goes, then backing out if it does not work out, or laboriously adapting a grammar to two versions supporting different analyses (Bender, 2010). The former approach is not in any way systematic, increasing the risk that the grammar is far from optimal in terms of efficiency. The latter approach potentially causes the grammar engineer an amount of work that will not scale for considering many different phenomena. This paper proposes a more systematic and tractable alternative to grammar development: metagrammar engineering. I use “metagrammar” as a generic term to refer to a system that can generate implemented grammars. The key idea is that the grammar engineer adds alternative plausable analyses for linguistic phenomena to a metagrammar. This metagrammar can generate all possible combinations of these analyses automatically, creating different versions of a grammar that cover the same phenomena. The engineer can test directly how competing analyses for different phenomena interact, and determine which combinations are possible (after minor adaptations) and which analyses are incompatible. The idea of metagrammar engineering is illustrated here through a case study of word order and auxiliaries in Germanic languages, which forms the second goal of this paper. Auxiliaries form a central phenomenon of German and Dutch and are likely to influence many components of the grammar. 
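As a conceptual sketch of this approach (the phenomenon and analysis names below, and the idea of emitting one choices-style specification per combination, are illustrative assumptions rather than the actual Grammar Matrix interface): a metagrammar can enumerate every combination of competing analyses and generate one grammar per combination, so that the alternatives can be profiled on the same test items.

    import itertools

    # Hypothetical inventory of competing analyses per phenomenon.
    alternatives = {
        "aux-structure": ["argument-composition", "aux+verb-construction"],
        "vp-fronting":   ["include-rare-partial-vp-fronting",
                          "exclude-rare-partial-vp-fronting"],
    }

    def grammar_variants(alternatives):
        # Yield one choices-style dict per combination of analyses.
        phenomena = sorted(alternatives)
        for combo in itertools.product(*(alternatives[p] for p in phenomena)):
            yield dict(zip(phenomena, combo))

    for choices in grammar_variants(alternatives):
        # Each dict would be handed to the grammar customization step; the
        # resulting grammars can then be compared for coverage and efficiency.
        print(choices)
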
The results show that the analysis of auxiliary+verb structures presented in Bender (2010) significantly im1066 proves efficiency of the grammar compared to the standard argument-composition analysis within the range of phenomena studied. Because future research is needed to determine whether the auxiliary+verb alternative can interact properly with additional phenomena and still lead to more efficient results than argument-composition, it is particularly useful to have a grammar generator that can automatically create grammars with either of the two analyses. The remainder of this paper starts with the case study. Section 2 provides a description of the context of the study. The relevant linguistic properties and alternative analyses are described in Sections 3 and 4. After evaluating and discussing the case study’s results, I return to the general approach of metagrammar engineering. Section 6 presents related work on metagrammars. It is followed by a conclusion and discussion on using metagrammars as a methodology for grammar engineering. 2 A metagrammar for Germanic Languages 2.1 The LinGO Grammar Matrix The LinGO Grammar Matrix (Bender et al., 2002; Bender et al., 2010) provides the main context for the experiments described in this paper. To begin with, its further development plays a significant role for the motivation of the present study. More importantly, the Germanic metagrammar is implemented as a special branch of the LinGO Grammar Matrix and uses a significant amount of its code. The Grammar Matrix customization system allows users to derive a starter grammar for a particular language from a common multi-lingual resource by specifying linguistic properties through a webbased questionnaire. The grammars are intended for parsing and generation with the LKB (Copestake, 2002) using Minimal Recursion Semantics (Copestake et al., 2005, MRS) as parsing output and generation input. After the starter grammar has been created, its development continues independently: engineers can thus make modifications to their grammar without affecting the multi-lingual resource. Internally, the customization system works as follows: The web-based questionnaire registers linguistic properties in a file called “choices” (henceforth choices file). The customization system takes this choices file as input to create grammar fragments, using so-called “libraries” that contain implementations of cross-linguistically variable phenomena. Depending on the definitions provided in the choices file, different analyses are retrieved from the customization system’s libraries. The language specific implementations inherit from a core grammar which handles basic phrase types, semantic compositionality and general infrastructure, such as feature geometry (Bender et al., 2002). The present study is part of a larger effort to improve the customization library for auxiliary structures in free word order and verb second languages. It examines whether Bender’s observations concerning an improved analysis for auxiliaries in Wambaya (Bender, 2010) also hold for Germanic languages. A more elaborate study of German and Dutch (including both Flemish and (Northern) Dutch, which have slightly different word order constraints) is informative, because these languages are well-described and known to have distinctly challenging word order behavior. 2.2 Germanic branch In order to create grammars for Germanic languages, a specialized branch of the Grammar Matrix customization system was developed. 
This Germanic grammars generator uses the Grammar Matrix’s facilities to generate types in type description language (tdl). At present, the generator uses the Grammar Matrix analyses for agreement and case marking as well as basics from its morphotactics, coordination and lexicon implementations. In the first stage, the word order library and auxiliary implementation were extended to cover two alternative analyses for Germanic word order (see Section 4). The coordination library was adapted to ensure correct interactions with the new word order analyses and agreement. The morphotactics library was extended to cover Dutch and Flemish interactions between word order and morphology. Finally, the lexicon and verbal case pattern implementations were extended to cover ditransitive verbs. Both versions of word order analyses can be tweaked to include or exclude a rarely occurring variant of partial VP fronting (see Section 4.3) resulting in four distinct grammars for each of the 1067 Vorfeld LB Mittelfeld RB Nachfeld Der Mann hat den Jungen gesehen nach der Party The man.nom has the boy.acc seen after the party Der Mann hat den Jungen nach der Party gesehen Den Jungen hat der Mann gesehen nach der Party Nach der Party hat der Mann den Jungen gesehen Den Jungen gesehen hat der Mann nach der Party Gesehen hat der Mann den Jungen nach der Party The man saw the boy after the party Table 1: Basic structure of German word order (not exhaustive) languages under investigation. These 12 grammars cover Dutch, Flemish and German main clauses with up to three core arguments.1 3 Germanic word order 3.1 German word order Topological fields (Erdmann, 1886; Drach, 1937) form the easiest way to describe German word order. The sentence structure for declarative main clauses, consists of five topological fields: Vorfeld (“pre-field”), Left Bracket (LB), Mittelfeld (“middle field”), Right Bracket (RB) and the Nachfeld (“after field”). A subset of permissible alternations in German are provided in Table 1. The last two sentences present an example of partial VP fronting. The fields are defined with regard to verbal forms, which are placed in the Left and Right Brackets. Each topological field has word order restrictions of its own. The Vorfeld must contain exactly one constituent in an affirmative main clause. The Left Bracket contains the finite verb and no other elements. Other verbal forms (if not fronted to the Vorfeld) must be placed in the Right Bracket. Most nonverbal elements are placed in the Mittelfeld. When main verbs are placed in the Vorfeld, their object(s) may stay in the Mittelfeld. This kind of partial VP fronting is illustrated by the last example in Table 1. The Nachfeld typically contains subordinate clauses and sometimes adverbial phrases. In German, the respective order between the verbs in the Right Bracket is head-final, i.e. auxiliaries follow their complements. The only exception is the 1The grammar generation system also creates Danish grammars. Danish results are not presented, because the language does not pose the challenges explained in Section 4. auxiliary flip: under certain conditions in subordinate clauses, the finite verb precedes all other verbal forms. 3.2 Dutch word order Dutch word order reveals the same topological fields as German. There are two main differences between the languages where word order is concerned. 
First, whereas the order of arguments in the German Mittelfeld allows some flexibility depending on information structure, Dutch argument order is fixed, except for the possibility of placing any argument in the Vorfeld. A related aspect is that Dutch is less flexible as to what partial VPs can be placed in the Vorfeld. The second difference is the word order in the Right Bracket. The order of auxiliaries and their complements is less rigid in Dutch and typically auxiliary-complement, the inverse of German order. Most Dutch auxiliaries can occur in both orders, but this may be restricted according to their verb form. Four groups of auxiliary verbs can be distinguished that have different syntactic restrictions. 1. Verbs selecting for participles which may appear on either side of their complement (e.g. hebben (“have”), zijn (“be”)). 2. Verbs selecting for participles which prefer to follow their complement and must do so if they are in participle form themselves (e.g. blijven (“remain”), krijgen (“get”)). 3. Modals selecting for infinitives which prefer to precede their complement and must do so if they appear in infinitive form themselves. 1068 VF LB MF RB De man zou haar kunnen hebben gezien the man would her.acc can have seen De man zou haar gezien kunnen hebben %De man zou haar kunnen gezien hebben The man should have been able to see her Table 2: Variations of Dutch auxiliary order 4. Verbs selecting for “to infinitives” which must precede their complement. While there is some variation among speakers, the generalizations above are robust. The permitted variations assuming a verb of the 3rd and 1st category in the right bracket are presented in Table 2.2 The variant %De man zou haar kunnen gezien hebben is typical of speakers from Belgium (Haeseryn, 1997); speakers from the Netherlands tend to regard such structures as ungrammatical. Our system can both generate a Flemish grammar accepting all of the above and a (Northern) Dutch grammar, rejecting the third variant. 4 Alternative auxiliary approaches This section presents the alternative analyses for auxiliary-verb structures in Germanic languages compared in this study. For reasons of space, I limit my description to an explanation of the differences and relevance of the compared analyses.3 4.1 Argument-composition The standard analysis for German and Dutch auxiliaries in HPSG is a so-called “argumentcomposition” analysis (Hinrichs and Nakazawa, 1994), which I will explain through the following Dutch example:4 (1) Ik I zou would het the boek book willen want lezen. read. “I would like to read the book.” In the sentence above, the auxiliary willen “want” separates the verb lezen “read” from its object het 2Note that the same orders as in the Right Brackets may also occur in the Vorfeld (with or without the object). 3Details of the implementations can be found by using the metagrammar, which can be found on my homepage. 4Hinrichs and Nakazawa (1994) present an analysis for the German auxiliary flip. The relevant observations are the same. 2 66664 VAL 2 66664 SUBJ 1 COMPS *2 64 HEAD verb VAL " SUBJ 1 COMPS 2 # 3 75, 2 + 3 77775 3 77775 Figure 1: Standard Auxiliary Subcategorization boek “the book”. A parser respecting surface order can thus not combine lezen and het boek before combining willen and lezen. The argument-composition analysis was introduced to make sure that het boek can be picked up as the object of the embedded verb lezen. The subcategorization of an auxiliary under this analysis is presented in Figure 1. 
The subject of the auxiliary is identical to the subject of the auxiliary’s complement. Its complement list consists of the concatenation of the verbal complement and any complement this verbal complement may select for. In the sentence above, willen will add the subject and the object of lezen to its own subcatorization lists.5 This standard solution for auxiliary-verb structures is (with minor differences) also what is provided by the Matrix customization system. Argument-composition can capture the grammatical behavior of auxiliaries in German and Dutch. However, grammaticality and coverage is not all that matters for grammars of natural language. Efficiency remains an important factor, and argumentcomposition has some undesirable properties on this level. The problem lies in the fact that lexical entries of auxiliaries have underspecified elements on their subcategorization lists. With the current chart parsing and chart generation algorithms (Carroll and Oepen, 2005), an auxiliary in a language with flexible word order will speculatively add edges to the chart for potential analyses with the adjacent constituent as subject or complement. Because the length of the lists are underspecified as well, it can continue wrongly combining with all elements in the string. In the worse case scenario, the number of edges created by an auxiliary grows exponentially in the number of words and constituents in the string. The efficiency problem is even worse for generation: while the parser is restricted by the surface order of 5In the semantic representation, both arguments will be directly related to the main verb exclusively. 1069 ` i ´ 2 4VAL " SUBJ ⟨⟩ COMPS Dˆ HEAD verb ˜E #3 5 ` ii ´ 2 6666664 VAL " SUBJ 1 COMPS 2 # HEAD-DTR|VAL| COMPS 3 NON-HEAD-DTR 3 " VAL " SUBJ 1 COMPS 2 ## 3 7777775 Figure 2: Auxiliary lexical type (i) and Auxiliary+verb construction (ii) under alternative analysis the string, the generator will attempt to combine all lexical items suggested by the input semantics, as well as lexical items with empty semantics, in random order. 4.2 Aux+verb construction Bender (Bender, 2010)6 presents an alternative approach to auxiliary-verb structures for the Australian language Wambaya. The analysis introduces auxiliaries that only subcategorize for one verbal complement, not raising any of the complement’s arguments or its subject. Auxiliaries combine with their complement using a special auxiliary+verb rule. Figure 2 presents this alternative solution. In principle, the new analysis uses the same technique as argument composition. The difference is that the auxiliary now starts out with only one element in its subcategorization lists and can only combine with potential verbal complements that are appropriately constrained. The structure that combines the auxiliary with its complement places the remaining elements on the complement’s SUBJ and COMPS lists on the respective lists of the newly formed phrase, as can be seen in Figure 2 (ii). The constraints on raised arguments are known when the construction applies. The efficiency problem sketched above is thus avoided. 4.3 A small wrinkle: partial VP fronting In its basic form, the auxiliary+verb structure cannot handle partial VP fronting where the main verb is placed in first position leaving one or more verbal 6Bender credits the key idea behind this analysis to Dan Flickinger (Bender, 2010). forms in the verbal cluster, as illustrated in (2) for Dutch: (2) Gezien Seen zou should de the man man haar her kunnen can hebben. 
have “The man should have been able to see her.” The problem is that hebben “have” cannot combine with gezien “seen”, because they are separated by the head of the clause. Because the verb hebben cannot combine with its complement, it cannot raise its complement’s arguments either: the auxiliary+verb analysis only permits raising when auxiliary and complement combine. This shortcoming is no reason to immediately dismiss the proposal. Structures such as (2) are extremely rare. The difference in coverage of a parser that can and a parser that cannot handle such structures is likely to be tiny, if present at all, nor is it vital for a sentence generator to be able to produce them. However, a correct grammar should be able to analyze and produce all grammatical structures. I implemented an additional version of the auxiliary+verb construction using two rather complex rules that capture examples such as (2). Because the structure in (2) also presented difficulties for the argument-composition analysis in Dutch, I tested both of the analyses with and without the inclusion of these structures. In the ideal case, the full coverage version will remain efficient enough as the grammar grows. But if this turns out not to be the case, the decision can be made to exclude the additional rule from the grammar or to use it as a robustness rule that is only called when regular rules fail. Given the metagrammar engineering approach, it will be straightforward to decide at a later point to exclude the special rule, if corpus studies reveal this is favourable. 5 Grammars and evaluation 5.1 Experimental set-up As described above, the Germanic metagrammar is a branch of the customization system. As such, it takes a choices file as input to create a grammar. The basic choices files for Dutch and German were created through the LinGO Grammar Matrix web inter1070 Complete Set Reduced Set Positive Total Positive Total Av. s s s s w/s Du 177 14654 138 14591 6.61 Fl 195 14654 156 14606 6.61 Ge 116 6926 84 6914 6.65 Table 3: Number of test examples (s) used in evaluation and average words per sentence (w/s) face.7 The choices files defined artificial grammars with a dummy vocabulary. The system can produce real fragments of the languages, but strings representing syntactic properties through dummy vocabulary were used to give better control over ambiguity facilitating the evaluation of coverage and overgeneration of the grammars. The grammars have a lexicon of 9-10 unambiguous dummy words. The created choices files were extended offline to define those properties that the Germanic metagrammar captures, but are not incorporated in the Matrix customization system. This included word order of the auxiliary and complement, fixed or free argument order, influence of inflection on word order, a more elaborate case hierarchy, ditransitive verbs, and the choice of auxiliary/verb analysis. Four choices files with different combinations of analyses were created for each language, resulting in 12 choices files in total. A basic test suite was developed that covers intransitive, transitive and ditransitive main clauses with up to three auxiliaries. The German set was based on a description provided by Kathol (2000), Dutch and Flemish were based on Haeseryn (1997). For each verb and auxiliary combination, all permissible word orders were defined based on descriptive resources. In order to make sure the grammars do not reveal unexpected forms of overgeneration, all possible ungrammatical orders were automatically generated. 
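As an illustration of how such negative items can be produced, the sketch below enumerates every permutation of a small set of dummy tokens and keeps the orders not listed as permissible. The token names and the permissible set are toy stand-ins for this sketch, not the actual test-suite machinery used in the experiments.

from itertools import permutations

def ungrammatical_orders(tokens, permissible):
    """Return every ordering of `tokens` not listed in `permissible`.

    `permissible` holds the word orders defined from descriptive
    resources; all remaining permutations become negative examples.
    """
    allowed = {tuple(p) for p in permissible}
    return sorted(set(permutations(tokens)) - allowed)

# Toy illustration with dummy vocabulary standing in for syntactic slots.
tokens = ["subj", "aux-fin", "obj", "ptc"]
permissible = [
    ("subj", "aux-fin", "obj", "ptc"),   # subject in the Vorfeld
    ("obj", "aux-fin", "subj", "ptc"),   # object in the Vorfeld
    ("ptc", "aux-fin", "subj", "obj"),   # participle in the Vorfeld
]
for order in ungrammatical_orders(tokens, permissible):
    print(" ".join(order))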
Table 3 provides the sizes of the test suites. Each language has both a complete set for the 6 grammars that provide full coverage, and a reduced set for the 6 grammars that can not handle split verbal clusters (see Section 4.3 for the motivation to test grammars that do not have full coverage). 7http://www.delph-in.net/matrix/ customize/ Each grammar was created using the metagrammar, ensuring that all components except the competing analyses were held constant among compared grammars. The [incr tsdb()] competence and performance profiling environment (Oepen, 2001) was used in combination with the LKB to evaluate parsing performance of the individual grammars on the test suites. For each grammar, the number of required parsing tasks, memory (space) and CPU time per sentence, as well as the number of passive edges created during an average parse were compared. Performance on language generation was evaluated using the LKB. 5.2 Parsing results Table 4 presents the results from the parsing experiment. Note that all directly compared grammars have the same empirical coverage (100% coverage and 0% overgeneration on the phenomena included in the test suites). The comparison therefore addresses the effect on efficiency of the alternative analyses. Three tests per grammar were carried out: one on positive data, one on negative data and one on the complete dataset. Results were similar for all three sets, with slightly larger differences in efficiency for negative examples. For reasons of space, only the results on positive examples are presented, which are more relevant for most applications involving parsing. The results show that the auxiliary+verb (aux+v) leads to a more efficient grammar according to all measures used. There is an average reduction of 73.2% in performed tasks, 56.3% in produced passive edges and 32.9% in memory when parsing grammatical examples using the auxiliary+verb structure compared to argument-composition. CPU-time per sentence also improved significantly, but, due to the short average sentence length (5-10 words) the value is too small for exact comparison with [incr tsdb()]. 5.3 Sentence generation evaluation The complete coverage versions of Dutch and German were used to create the exhaustive set of sentences with an intransitive, transitive and ditransitive verb combined with none, one or two auxiliaries but rapidly loses ground when one or more auxiliaries8 8All auxiliaries in the grammars contribute an ep. 1071 Average Performed Tasks Compl. Cov. Gram. No Split Cl. Gram. arg-comp aux+v arg-comp aux+v Du 524 149 480 134 Fl 529 150 483 137 Ge 684 148 486 136 Average Created Edges Compl. Cov. Gram. No Split Cl. Gram. arg-comp aux+v arg-comp aux+v Du 58 25 52 25 Fl 58 26 52 25 Ge 67 23 52 24 Average Memory Use (kb) Compl. Cov. Gram. No Split Cl. Gram. arg-comp aux+v arg-comp aux+v Du 9691 6692 8944 6455 Fl 9716 6717 8989 6504 Ge 10289 5675 8315 5468 Average CPU Time (s) Compl. Cov. Gram. No Split Cl. Gram. arg-comp aux+v arg-comp aux+v Du 0.04 0.02 0.03 0.01 Fl 0.04 0.02 0.03 0.01 Ge 0.06 0.01 0.04 0.01 Table 4: Parsing results positive examples from a total of 18 MRSs. The input MRSs were obtained by parsing a sentence with canonical word order. Both versions provide the same set of sentences as output, confirming their identical empirical coverage. Table 5 presents the number of edges required by the generator to produce the full set of generated sentences from a given MRS. 
The cells with no number represent conditions under which the LKB generator reaches the maximum limit of edges, set at 40,000, without completing its exhaustive search. The grammar using argument-composition is slightly more efficient when there are no auxiliaries, are added, in particular when sentence length increases: For ditransitive verbs (dv), the Dutch argument-composition grammar maxes out the 40,000 edge limit with two auxiliaries, whereas the auxiliary+verb grammar creates 910 edges, a manageable number. Due to the more liberal order of arguments, results are even worse for German: the argument-composition grammar reaches its limit with the first auxiliary for ditransitive verbs. These results indicate that the auxiliary+verb analysis is Required edges Du No Aux 1 Aux 2 Aux arg-c aux+v arg-c aux+v arg-c aux+v iv 54 57 221 99 792 248 tv 124 141 1311 211 7455 500 dv 212 230 14968 378 – 910 Ge No Aux 1 Aux 2 Aux arg-c aux+v arg-c aux+v arg-c aux+v iv 54 57 295 84 1082 165 tv 130 142 4001 212 18473 422 dv 306 351 – 608 – 1379 Table 5: Performance on Sentence Generation strongly preferable where natural language generation is concerned. 5.4 In summary The results of the experiment presented above show that avoiding underspecified subcategorization lists, as found in the standard argument-composition analysis, significantly increases the efficiency of the grammar for both parsing and generation. On average, they show a reduction of 73.2% in performed tasks, 56.3% in produced passive edges and 32.9% in memory for parsing. In generation experiments, results are even more impressive: the reduction of edges for German sentences with one auxiliary and a ditransitve verb is at least 98.5%. These results show that the auxiliary+verb alternative should be considered seriously as an alternative to the HPSG standard analysis of argument-composition, though further investigation in a larger context is needed before final conclusions can be drawn. Future work will focus on increasing the coverage of the grammars, as well as the number of alternative options explored. In particular, both approaches for auxiliaries should be compared using alternative analyses for verb-second word order found in other HPSG-based grammars, such as the GG (M¨uller and Kasper, 2000; Crysmann, 2005), Grammix (M¨uller, 2009; M¨uller, 2008) and Cheetah (Cramer and Zhang, 2009) for German, and Alpino (Bouma et al., 2001) for Dutch. These grammars may use approaches that somewhat reduce the problem of argument-composition, leading to less significant differences between the auxiliary+verb and argument-composition analyses. On the other hand, planned extensions that cover modification and sub1072 ordinate clauses will increase local ambiguities. The advantage of the auxiliary+verb analysis is likely to become more important as a result. In addition to providing a clearer picture of auxiliary structures, these extensions will also lead to a better insight into efforts involved in using grammar generation to explore alternative versions of a grammar over time. In particular, it should provide an indication of the feasibility of maintaining a higher number of competing analyses as the grammar grows. After providing background on related metagrammar projects and their goals, I will elaborate on the importance of systematic exploration of grammars in the discussion. 6 Related work Metagrammars (or grammar generators) have been established in the field for over a decade. 
This section provides an overview of the goals and set-up of some of the most notable projects. The MetaGrammar project (Candito, 1998; de la Clergerie, 2005; Kinyon et al., 2006) started as an effort to encode syntactic knowledge in an abstract class hierarchy. The hierarchy can contain cross-linguistically invariable properties and syntactic properties that hold across frameworks (Kinyon et al., 2006). The factorized descriptions of MetaGrammar support Tree-Adjoining Grammars (Joshi et al., 1975, TAG) as well as Lexical Functional Grammars (Bresnan, 2001, LFG). The eXtensible MetaGrammar (Crabb´e, 2005, XMG) defines its MetaGrammar as classes that are part of a multiple inheritance hierarchy. Kinyon et al. (Kinyon et al., 2006) use XMG to perform a cross-linguistic comparison of verb-second structures. Their study focuses on code-sharing between the languages, but does not address the problem of competing analyses investigated in this paper. The GF Resource Grammar Library (Ranta, 2009) is a multi-lingual linguistic resource that contains a set of syntactic analyses implemented in GF (Grammatical Framework). The purpose of the library is to allow engineers working on NLP applications to write simple grammar rules that can call more complex syntactic implementations from the grammar library. The grammar library is written by researchers with linguistic expertise. It makes extensive use of code sharing: general categories and constructions that are used by all languages are implemented in a core syntax grammar. Each language9 has its own lexicon and morphology, as well as a set of language specific syntactic structures. Code sharing also takes place between the subset of languages explored, in particular by means of common modules for Romance languages and for Scandanavian languages. PAWS creates PC-PATR (McConnel, 1995) grammars based on field linguists’ input. The main purpose of PAWS lies in descriptive grammar writing and “computer-assisted related language adaptation”, where the grammar is used to map words from a text in a source language to a target language. PAWS differs from the other projects discussed here, because grammar engineering or syntactic research are not the main focus of the project. The LinGO Grammar Matrix, described in Section 2.1, is most closely related to the work presented in this paper. Like the other projects reviewed here, the Grammar Matrix does not offer alternative analyses for the same phenomenon. Moreover, starter grammars created by the Grammar Matrix are developed manually and individually after their creation. The approach taken in this paper differs from the original goal of the Grammar Matrix in that it continues the development of new grammars within the system, introducing a novel application for metagrammars. By using a metagrammar to store alternative analyses, grammars can be explored systematically over time. As such, the paper introduces a novel methodology for grammar engineering. The discussion and conclusion will elaborate on the advantages of the approach. 7 Discussion and conclusion 7.1 The challenge of choosing the right analysis As mentioned in the introduction, most phenomena in natural languages can be accounted for by more than one formal analysis. An engineer may implement alternative solutions and test the impact on the grammar concerning interaction with other phenomena (Bierwisch, 1963; M¨uller, 1999; Bender, 2008; Bender et al., 2011) and efficiency to decide between analyses. 
9Ranta (Ranta, 2009) reports that GF is developed for fourteen languages, and more are under development. 1073 However, it is not feasible to carry out comparative tests by manually creating different versions of a grammar every time a decision about an implementation is made. Moreover, even if such a study were carried out at each stage, only the interaction with the current state of the grammar would be tested. This has two undesirable consequences. First, options may be rejected that would have worked perfectly well if different decisions had been made in the past. Second, because each decision is only based on the current state of the grammar, the resulting grammar is partially (or even largely) a product of the order in which phenomena are treated.10 For grammar engineers with practical applications in mind, this is undesirable because the resulting grammar may end up far from optimal. For grammar writers that use engineering to find valid linguistic analyses, the problem is even more serious: if there is a truth in a declarative grammar, surely, this should not depend on the order in which phenomena are treated. 7.2 Metagrammar engineering This paper proposes to systematically explore analyses throughout the development of a grammar by writing a metagrammar (or grammar generator), rather than directly implementing the grammar. A metagrammar can contain several different analyses for the same phenomenon. After adding a new phenomenon to the metagrammar, the engineer can automatically generate versions of the grammar containing different combinations of previous analyses. As a result, the engineer can not only systematically explore how alternative analyses interact with the current grammar, but also continue to explore interactions with phenomena added in the future. Especially for alternative approaches to basic properties of the language, such as the auxiliary-verb structures examined in this study, parallel analyses may prevent the cumbersome scenario of changing a deeply embedded property of a large grammar. An additional advantage is that the engineer can use the methodology to make different versions of the grammar depending on its intended application. 10It is, of course, possible to go back and change old analyses based on new evidence. In practice, the large effort involved will only be undertaken if the advantages are apparent beforehand. For instance, it is possible to develop a highly restricted version for grammar checking that provides detailed feedback on detected errors (Bender et al., 2004), next to a version with fewer constraints to parse open text. As far as finding optimal solutions is concerned, it must be noted that this approach does not guarantee a perfect result, partially because there is no guarantee the grammar engineer will think of the perfect solution for each phenomenon, but mainly because it is not maintainable to implement all possible alternatives for each phenomenon and make them interact correctly with all other variations in the grammar. The grammar engineer still needs to decide which alternatives are the most promising and therefore the most important to implement and maintain. The resulting grammar therefore partially remains a result of the order in which phenomena are implemented. Nevertheless, the grammar engineer can keep and try out solutions in parallel for a longer time, increasing the possibility of exploring more alternative versions of the grammar. 
These additional investigations allow for better informed decisions to stop exploring certain analyses. In addition, by breaking up analyses into possible alternatives, chances are that the resulting metagrammar will be more modular than a directly written grammar would have been, which facilitates exploring alternatives further. In sum, even though metagrammar engineering does not completely solve the challenge of complete explorations of a grammar’s possibilities, it does facilitate this process so that finding optimal solutions becomes more likely, leading to better supported choices among alternatives and a more scientific approach to grammar development. Acknowledgments. The work described in this paper has been supported by the project TAKE (Technologies for Advanced Knowledge Extraction), funded under contract 01IW08003 by the German Federal Ministry of Education and Research. Emily M. Bender, Laurie Poulson, Christoph Zwirello, Bart Cramer, Kim Gerdes and three anonymous reviewers provided valuable feedback that resulted in significant improvement of the paper. Naturally, all remaining errors are my own. 1074 References Emily M. Bender, Dan Flickinger, and Stephan Oepen. 2002. The grammar matrix: An open-source starterkit for the rapid development of cross-linguistically consistent broad-coverage precision grammars. In John Carroll, Nelleke Oostdijk, and Richard Sutcliffe, editors, Proceedings of the Workshop on Grammar Engineering and Evaluation at the 19th International Conference on Computational Linguistics, pages 8– 14, Taipei, Taiwan. Emily M. Bender, Dan Flickinger, Stephan Oepen, Annemarie Walsh, and Tim Baldwin. 2004. Arboretum: Using a precision grammar for grammar checking in call. In Proceedings of the InSTIL/ICAL Symposium: NLP and Speech Technologies in Advance Language Learning Systems, Venice, Italy. Emily M. Bender, Scott Drellishak, Antske Fokkens, Laurie Poulson, and Safiyyah Saleem. 2010. Grammar customization. Research on Language & Computation, 8(1):23–72. Emily M. Bender, Dan Flickinger, and Stephan Oepen. 2011. Grammar engineering and linguistic hypothesis testing. In Emily M. Bender and Jennifer E. Arnold, editors, Language from a Cognitive Perspective: Grammar, Usage and Processing, pages 5–29. Stanford: CSLI Publications, Palo Alto, USA. Emily M. Bender. 2008. Grammar engineering for linguistic hypothesis testing. In Nicholas Gaylord, Alexis Palmer, and Elias Ponvert, editors, Proceedings of the Texas Linguistics Society X Conference: Computational Linguistics for Less-Studied Languages, pages 16–36, Stanford. CSLI Publications. Emily M. Bender. 2010. Reweaving a grammar for Wambaya: A case study in grammar engineering for linguistic hypothesis testing. Linguistic Issues in Language Technology, 3(3):1–34. Manfred Bierwisch. 1963. Grammatik des deutschen Verbs, volume II of Studia Grammatica. Akademie Verlag. Gosse Bouma, Gertjan van Noord, and Robert Malouf. 2001. Alpino: Wide coverage computational analysis of Dutch. In Computational Linguistics in the Netherlands CLIN 2000. Joan Bresnan. 2001. Lexical Functional Syntax. Blackwell Publishers, Oxford. Marie-Helene Candito. 1998. Building parallel LTAG for French and Italian. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 211– 217, Montreal, Quebec, Canada. Association for Computational Linguistics. John Carroll and Stephan Oepen. 2005. 
High efficiency realization for a wide-coverage unification grammar. In IJCNLP, Jeju Island. Springer-Verlag LNCS. Ann Copestake, Dan Flickinger, Carl Pollard, and Ivan Sag. 2005. Minimal recursion semantics. an introduction. Journal of Research on Language and Computation, 3(2–3):281 – 332. Ann Copestake. 2002. Implementing Typed Feature Structure Grammars. CSLI Publications, Stanford, CA. Benoˆıt Crabb´e. 2005. Repr´esentation modulaire et param´etrable de grammaires ´electroniques lexicalis´ees. Ph.D. thesis, Universit´e de Paris 7. Bart Cramer and Yi Zhang. 2009. Constructon of a German HPSG grammar from a detailed treebank. In Proceedings of the ACL 2009 Grammar Engineering across Frameworks workshop, pages 37–45, Singapore, Singapore. Berthold Crysmann. 2005. Relative clause extraposition in German: An efficient and portable implementation. Research on Language and Computation, 3(1):61–82. ´Eric Villemonte de la Clergerie. 2005. From metagrammars to factorized TAG/TIG parsers. In Proceedings of IWPT’05, pages 190–191. Erich Drach. 1937. Grundgedanken der Deutschen Satzlehre. Diesterweg, Frankfurt am Main, Germany. Oskar Erdmann. 1886. Grundz¨uge der deutschen Syntax nach ihrer geschichtlichen Entwicklung dargestellt. Erste Abteilung. Verlag der Cotta’schen Buchhandlung, Stuttgart, Germany. Walter Haeseryn. 1997. De gebruikswaarde van de ans voor tekstschrijvers, taaltrainers en taaladviseurs. Tekst[blad], 3. Erhard Hinrichs and Tsuneko Nakazawa. 1994. Linearizing auxs in German verbal complexes. In John Nerbonne, Klaus Netter, and Carl Pollard, editors, German in HPSG. CSLI, Stanford, USA. Aravind K. Joshi, Leon S. Levy, and Masako Takahashi. 1975. Tree adjunct grammars. Journal of Computer and System Sciences, 10(1):136–163. Andreas Kathol. 2000. Linear Syntax. Oxford Press. Alexandra Kinyon, Owen Rambow, Tatjana Scheffler, SinWon Yoon, and Aravind K. Joshi. 2006. The metagrammar goes multilingual: A cross-linguistic look at the V2-phenomenon. In Proceedings of the Eighth International Workshop on Tree Adjoining Grammar and Related Formalisms, pages 17–24, Sydney, Australia. Association for Computational Linguistics. Stephen McConnel. 1995. PC-PATR reference manual. Stefan M¨uller and Walter Kasper. 2000. HPSG analysis for German. In Wolfgang Wahlster, editor, Verbmobil: Foundations of Speech-to-Speech translation, pages 238 – 253, Berlin, Germany. Springer. 1075 Stefan M¨uller. 1999. Deutsche Syntax deklarativ. HeadDriven Phrase Structure Grammar f¨ur das Deutsche. Max Niemeyer Verlag, T¨ubingen. Stefan M¨uller. 2008. Depictive secondary predicates in german and english. In Christoph Schroeder, Gerd Hentschel, and Winfried Boeder, editors, Secondary Predicates in Eastern European Languages and Beyond, number 16 in Studia Slavica Oldenburgensia, pages 255–273, Oldenburg, Germany. BIS-Verlag. Stefan M¨uller. 2009. On predication. In Stefan M¨uller, editor, Proceedings of the 16th International Conference on Head-Driven Phrase Structure Grammar, Stanford, USA. CSLI Publications. Stephan Oepen. 2001. [incr tsdb()] — competence and performance laboratory. Technical report, DFKI, Saarbr¨ucken, Germany. Aarne Ranta. 2009. The GF resource grammar library. Linguistic Issues in Language Technology, 2(2). 1076
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1077–1086, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Simple Unsupervised Grammar Induction from Raw Text with Cascaded Finite State Models Elias Ponvert, Jason Baldridge and Katrin Erk Department of Linguistics The University of Texas at Austin Austin, TX 78712 {ponvert,jbaldrid,katrin.erk}@mail.utexas.edu Abstract We consider a new subproblem of unsupervised parsing from raw text, unsupervised partial parsing—the unsupervised version of text chunking. We show that addressing this task directly, using probabilistic finite-state methods, produces better results than relying on the local predictions of a current best unsupervised parser, Seginer’s (2007) CCL. These finite-state models are combined in a cascade to produce more general (full-sentence) constituent structures; doing so outperforms CCL by a wide margin in unlabeled PARSEVAL scores for English, German and Chinese. Finally, we address the use of phrasal punctuation as a heuristic indicator of phrasal boundaries, both in our system and in CCL. 1 Introduction Unsupervised grammar induction has been an active area of research in computational linguistics for over twenty years (Lari and Young, 1990; Pereira and Schabes, 1992; Charniak, 1993). Recent work (Headden III et al., 2009; Cohen and Smith, 2009; H¨anig, 2010; Spitkovsky et al., 2010) has largely built on the dependency model with valence of Klein and Manning (2004), and is characterized by its reliance on gold-standard part-of-speech (POS) annotations: the models are trained on and evaluated using sequences of POS tags rather than raw tokens. This is also true for models which are not successors of Klein and Manning (Bod, 2006; H¨anig, 2010). An exception which learns from raw text and makes no use of POS tags is the common cover links parser (CCL, Seginer 2007). CCL established stateof-the-art results for unsupervised constituency parsing from raw text, and it is also incremental and extremely fast for both learning and parsing. Unfortunately, CCL is a non-probabilistic algorithm based on a complex set of inter-relating heuristics and a non-standard (though interesting) representation of constituent trees. This makes it hard to extend. Note that although Reichart and Rappoport (2010) improve on Seginer’s results, they do so by selecting training sets to best match the particular test sentences—CCL itself is used without modification. Ponvert et al. (2010) explore an alternative strategy of unsupervised partial parsing: directly predicting low-level constituents based solely on word co-occurrence frequencies. Essentially, this means segmenting raw text into multiword constituents. In that paper, we show—somewhat surprisingly—that CCL’s performance is mostly dependent on its effectiveness at identifying low-level constituents. In fact, simply extracting non-hierarchical multiword constituents from CCL’s output and putting a rightbranching structure over them actually works better than CCL’s own higher level predictions. This result suggests that improvements to low-level constituent prediction will ultimately lead to further gains in overall constituent parsing. Here, we present such an improvement by using probabilistic finite-state models for phrasal segmentation from raw text. The task for these models is chunking, so we evaluate performance on identification of multiword chunks of all constituent types as well as only noun phrases. 
Our unsupervised chunkers extend straightforwardly to a cascade that predicts higher levels of constituent structure, similar to the supervised approach of Brants (1999). This forms an overall unsupervised parsing system that outperforms CCL by a wide margin. 1077 Mrs. Ward for one was relieved                                  1 (a) Chunks: (Mrs. Ward), (for one), and (was relieved) All came from Cray Research                            (b) Only one chunk extracted: (Cray Research) Fig. 1: Examples of constituent chunks extracted from syntactic trees 2 Data We use the standard data sets for unsupervised constituency parsing research: for English, the Wall Street Journal subset of the Penn Treebank-3 (WSJ, Marcus et al. 1999); for German, the Negra corpus v2 (Krenn et al., 1998); for Chinese, the Penn Chinese Treebank v5.0 (CTB, Palmer et al., 2006). We lower-case text but otherwise do not alter the raw text of the corpus. Sentence segmentation and tokenization from the treebank is used. As in previous work, punctuation is not used for evaluation. In much unsupervised parsing work the test sentences are included in the training material. Like Cohen and Smith, Headden III et al., Spitkovsky et al., we depart from this experimental setup and keep the evaluation sets blind to the models during training. For English (WSJ) we use sections 00-22 for training, section 23 for test and we develop using section 24; for German (Negra) we use the first 18602 sentences for training, the last 1000 sentences for development and the penultimate 1000 sentences for testing; for Chinese (CTB) we adopt the data-split of Duan et al. (2007). 3 Tasks and Benchmark Evaluation. By unsupervised partial parsing, or simply unsupervised chunking, we mean the segmentation of raw text into (non-overlapping) multiword constituents. The models are intended to capture local constituent structure – the lower branches of a constituent tree. For this reason we evaluate WSJ Chunks 203K NPs 172K Chnk ∩NPs 161K Negra Chunks 59K NPs 33K Chnk ∩NPs 23K CTB Chunks 92K NPs 56K Chnk ∩NPs 43K Table 1: Constituent chunks and base NPs in the datasets. % constituents % words WSJ Chunks 32.9 57.7 NPs 27.9 53.1 Negra Chunks 45.4 53.6 NPs 25.5 42.4 CTB Chunks 32.5 55.4 NPs 19.8 42.9 Table 2: Percentage of gold standard constituents and words under constituent chunks and base NPs. using what we call constituent chunks, the subset of gold standard constituents which are i) branching (multiword) but ii) non-hierarchical (do not contain subconstituents). We also evaluate our models based on their performance at identifying base noun phrases, NPs that do not contain nested NPs. Examples of constituent chunks extracted from treebank constituent trees are in Fig. 1. In English newspaper text, constituent chunks largely correspond with base NPs, but this is less the case with Chinese and German. Moreover, the relationship between NPs and constituent chunks is not a subset relation: some base NPs do have internal constituent structure. The numbers of constituent chunks and NPs for the training datasets are in Table 1. The percentage of constituents in these datasets which fall under these definitions, and the percentage of words under these constituents, are in Table 2. For parsing, the standard unsupervised parsing metric is unlabeled PARSEVAL. It measures precision and recall on constituents produced by a parser as compared to gold standard constituents. CCL benchmark. 
We use Seginer’s CCL as a benchmark for several reasons. First, there is a free/open-source implementation facilitating exper1078 imental replication and comparison.1 More importantly, until recently it was the only unsupervised raw text constituent parser to produce results competitive with systems which use gold POS tags (Klein and Manning, 2002; Klein and Manning, 2004; Bod, 2006) – and the recent improved raw-text parsing results of Reichart and Rappoport (2010) make direct use of CCL without modification. There are other raw-text parsing systems of note, EMILE (Adriaans et al., 2000), ABL (van Zaanen, 2000) and ADIOS (Solan et al., 2005); however, there is little consistent treebank-based evaluation of these models. One study by Cramer (2007) found that none of the three performs particularly well under treebank evaluation. Finally, CCL outperforms most published POS-based models when those models are trained on unsupervised word classes rather than gold POS tags. The only exception we are aware of is H¨anig’s (2010) unsuParse+, which outperforms CCL on Negra, though this is shown only for sentences with ten or fewer words. Phrasal punctuation. Though punctuation is usually entirely ignored in unsupervised parsing research, Seginer (2007) departs from this in one key aspect: the use of phrasal punctuation – punctuation symbols that often mark phrasal boundaries within a sentence. These are used in two ways: i) they impose a hard constraint on constituent spans, in that no constituent (other than sentence root) may extend over a punctuation symbol, and ii) they contribute to the model, specifically in terms of the statistics of words seen adjacent to a phrasal boundary. We follow this convention and use the following set: . ? ! ; , -◦ ￿ The last two are ideographic full-stop and comma.2 4 Unsupervised partial parsing We learn partial parsers as constrained sequence models over tags encoding local constituent structure (Ramshaw and Marcus, 1995). A simple tagset is unlabeled BIO, which is familiar from supervised chunking and named-entity recognition: the tag B 1http://www.seggu.net/ccl 2This set is essentially that of Seginer (2007). While it is clear from our analysis of CCL that it does make use of phrasal punctuation in Chinese, we are not certain whether ideographic comma is included. denotes the beginning of a chunk, I denotes membership in a chunk and O denotes exclusion from any chunk. In addition we use the tag STOP for sentence boundaries and phrasal punctuation. HMMs and PRLGs. The models we use for unsupervised partial parsing are hidden Markov models, and a generalization we refer to as probabilistic right linear grammars (PRLGs). An HMM models a sequence of observed states (words) x = {x1, x2, . . . , xN} and a corresponding set of hidden states y = {y1, y2, . . . , yN}. HMMs may be thought of as a special case of probabilistic contextfree grammars, where the non-terminal symbols are the hidden state space, terminals are the observed states and rules are of the form NONTERM → TERM NONTERM (assuming y1 and yN are fixed and given). So, the emission and transition emanating from yn would be characterized as a PCFG rule yn →xn yn+1. HMMs factor rule probabilities into emission and transition probabilities: P(yn →xn yn+1) = P(xn, yn+1|yn) ≈P(xn|yn) P(yn+1|yn). However, without making this independence assumption, we can model right linear rules directly: P(xn, yn+1|yn) = P(xn|yn, yn+1) P(yn+1|yn). 
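The practical difference between the two factorizations is easy to see in code. The toy probability tables below are invented for illustration (they are not trained values and not the UPPARSE implementation); the point is only that the PRLG emission is additionally conditioned on the next tag.

# Toy probability tables over the chunking tagset (B, I, O, STOP).
trans = {                       # P(next tag | tag)
    "B": {"I": 1.0},
    "I": {"B": 0.25, "I": 0.25, "O": 0.25, "STOP": 0.25},
}
hmm_emit = {                    # HMM: P(word | tag)
    "I": {"recent": 0.4, "acquisition": 0.5, "the": 0.1},
}
prlg_emit = {                   # PRLG: P(word | tag, next tag)
    ("I", "I"): {"recent": 0.7, "acquisition": 0.2, "the": 0.1},
    ("I", "O"): {"recent": 0.1, "acquisition": 0.8, "the": 0.1},
}

def hmm_rule_prob(tag, word, nxt):
    """P(tag -> word nxt) in the HMM: emission independent of the next tag."""
    return hmm_emit[tag][word] * trans[tag][nxt]

def prlg_rule_prob(tag, word, nxt):
    """P(tag -> word nxt) in the PRLG: emission conditioned on both tags."""
    return prlg_emit[(tag, nxt)][word] * trans[tag][nxt]

# The HMM assigns the same score to "recent" whether the chunk continues
# or ends; the PRLG can keep the two cases apart.
print(hmm_rule_prob("I", "recent", "I"), hmm_rule_prob("I", "recent", "O"))    # 0.1 0.1
print(prlg_rule_prob("I", "recent", "I"), prlg_rule_prob("I", "recent", "O"))  # 0.175 0.025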
So, when we condition emission probabilities on both the current state yn and the next state yn+1, we have an exact model. This direct modeling of the right linear grammar rule yn →xn yn+1 is what we call a probabilistic right-linear grammar. To be clear, a PRLG is just an HMM without the independence of emissions and transitions. See Smith and Johnson (2007) for a discussion, where they refer to PRLGs as Mealy HMMs. We use expectation maximization to estimate model parameters. For the E step, the forwardbackward algorithm (Rabiner, 1989) works identically for the HMM and PRLG. For the M step, we use maximum likelihood estimation with additive smoothing on the emissions probabilities. So, for the HMM and PRLG models respectively, for words 1079 STOP B O I 1 Fig. 2: Possible tag transitions as a state diagram. STOP B I O STOP .33 .33 .33 B 1 I .25 .25 .25 .25 O .33 .33 .33 Fig. 3: Uniform initialization of transition probabilities subject to the constraints in Fig. 2: rows correspond to antecedent state, columns to following state. w and tags s, t: ˆP(w|t) = C(t, w) + λ C(t) + λV ˆP(w|s, t) = C(t, w, s) + λ C(t, s) + λV where C are the soft counts of emissions C(t, w), rules C(t, w, s) = C(t →w s), tags C(t) and transitions C(t, s) calculated during the E step; V is the number of terms w, and λ is a smoothing parameter. We fix λ = .1 for all experiments; more sophisticated smoothing could avoid dependence on λ. We do not smooth transition probabilities (so ˆP(s|t) = C(t, s)/C(t)) for two reasons. First, with four tags, there is no data-sparsity concern with respect to transitions. Second, the nature of the task imposes certain constraints on transition probabilities: because we are only interested in multiword chunks, we expressly do not want to generate a B following a B – in other words P(B|B) = 0. These constraints boil down to the observation that the B and I states will only be seen in BII∗sequences. This may be expressed via the state transition diagram in Fig. 2. The constraints of also dictate the initial model input to the EM process. We use uniform probability distributions subject to the constraints of Fig. 2. So, initial model transition probabilities are given in Fig. 3. In EM, if a parameter is equal to zero, subsequent iterations of the EM process will not “unset” this parameter; thus, this form of initialization is a simple way of encoding constraints on model parameters. We also experimented with random initial models (subject to the constraints in Fig. 2). Uniform initialization usually works slightly better; also, uniform initialization does not require multiple runs of each experiment, as random initialization does. Motivating the HMM and PRLG. This approach – encoding a chunking problem as a tagging problem and learning to tag with HMMs – goes back to Ramshaw and Marcus (1995). For unsupervised learning, the expectation is that the model will learn to generalize on phrasal boundaries. That is, the models will learn to associate terms like the and a, which often occur at the beginnings of sentences and rarely at the end, with the tag B, which cannot occur at the end of a sentence. Likewise common nouns like company or asset, which frequently occur at the ends of sentences, but rarely at the beginning, will come to be associated with the I tag, which cannot occur at the beginning. The basic motivation for the PRLG is the assumption that information is lost due to the independence assumption characteristic of the HMM. 
With so few states, it is feasible to experiment with the more finegrained PRLG model. Evaluation. Using the low-level predictions of CCL as as benchmark, we evaluate the HMM and PRLG chunkers on the tasks of constituent chunk and base NP identification. Models were initialized uniformly as illustrated in Fig. 3. Sequence models learn via EM. We report accuracy only after convergence, that is after the change in full dataset perplexity (log inverse probability) is less than %.01 between iterations. Precision, recall and F-score are reported for full constituent identification – brackets which do not match the gold standard exactly are false positives. Model performance results on held-out test datasets are reported in Table 3. ‘CCL’ refers to the lowest-level constituents extracted from full CCL output, as a benchmark chunker. The sequence models outperform the CCL benchmark at both tasks and on all three datasets. In most cases, the PRLG sequence model performs better than the HMM; the exception is CTB, where the PRLG model is behind the HMM in evaluation, as well as behind CCL. As the lowest-level constituents of CCL were not specifically designed to describe chunks, we also 1080 English / WSJ German / Negra Chinese / CTB Task Model Prec Rec F Prec Rec F Prec Rec F Chunking CCL 57.5 53.5 55.4 28.4 29.6 29.0 23.5 23.9 23.7 HMM 53.8 62.2 57.7 35.0 37.7 36.3 37.4 41.3 39.3 PRLG 76.2 63.9 69.5 39.6 47.8 43.3 23.0 18.3 20.3 NP CCL 46.2 51.1 48.5 15.6 29.2 20.3 10.4 17.3 13.0 HMM 47.7 65.6 55.2 23.8 46.2 31.4 17.0 30.8 21.9 PRLG 76.8 76.7 76.7 24.6 53.4 33.6 21.9 28.5 24.8 Table 3: Unsupervised chunking results for local constituent structure identification and NP chunking on held-out test sets. CCL refers to the lowest constituents extracted from CCL output. WSJ Negra CTB Chunking 57.8 36.0 25.5 NPs 57.8 38.8 23.2 Table 4: Recall of CCL on the chunking tasks. checked the recall of all brackets generated by CCL against gold-standard constituent chunks. The results are given in Table 4. Even compared to this, the sequence models’ recall is almost always higher. The sequence models, as well as the CCL benchmark, show relatively low precision on the Negra corpus. One possible reason for this lies in the design decision of Negra to use relatively flat tree structures. As a result, many structures that in other treebanks would be prepositional phrases with embedded noun phrases – and thus non-local constituents – are flat prepositional phrases here. Examples include “auf die Wiesbadener Staatsanwaelte” (on Wiesbaden’s district attorneys) and “in Hannovers Nachbarstadt” (in Hannover’s neighbor city). In fact, in Negra, the sequence model chunkers often find NPs embedded in PPs, which are not annotated as such. For instance, in the PP “hinter den Kulissen” (behind the scenes), both the PRLG and HMM chunkers identify the internal NP, though this is not identified in Negra and thus considered a false positive. The fact that the HMM and PRLG have higher recall on NP identification on Negra than precision is further evidence towards this. Comparing the HMM and PRLG. To outline some of the factors differentiating the HMM and PRLG, we focus on NP identification in WSJ. The PRLG has higher precision than the HMM, while the two models are closer in recall. Comparing the predictions directly, the two models ofPOS Sequence # of errors TO VB 673 NNP NNP 450 MD VB 407 DT JJ 368 DT NN 280 Table 5: Top 5 POS sequences of the false positives predicted by the HMM. 
ten have the same correct predictions and often miss the same gold standard constituents. The improved results of the PRLG are based mostly on the fewer overall brackets predicted, and thus fewer false positives: for WSJ the PRLG incorrectly predicts 2241 NP constituents compared to 6949 for the HMM. Table 5 illustrates the top 5 POS sequences of the false positives predicted by the HMM.3 (Recall that we use gold standard POS only for post-experiment results analysis—the model itself does not have access to them.) By contrast, the sequence representing the largest class of errors of the PRLG is DT NN, with 165 errors – this sequence represents the largest class of predictions for both models. Two of the top classes of errors, MD VB and TO VB, represent verb phrase constituents, which are often predicted by the HMM chunker, but not by the PRLG. The class represented by NNP NNP corresponds with the tendency of the HMM chunker to split long proper names: for example, it systematically splits new york stock exchange into two chunks, (new york) (stock exchange), whereas the PRLG chunker predicts a single four-word chunk. The most interesting class is DT JJ, which represents the difficulty the HMM chunker has at dis3For the Penn Treebank tagset, see Marcus et al. (1993). 1081 1 Start with raw text: there is no asbestos in our products now 2 Apply chunking model: there (is no asbestos) in (our products) now 3 Create pseudowords: there is in our now 4 Apply chunking model (and repeat 1–4 etc.): (there is ) (in our ) now 5 Unwind and create a tree: there is no asbestos in our products now 1 Fig. 4: Cascaded chunking illustrated. Pseudowords are indicated with boxes. tinguishing determiner-adjective from determinernoun pairs. The PRLG chunker systematically gets DT JJ NN trigrams as chunks. The greater context provided by right branching rules allows the model to explicitly estimate separate probabilities for P(I →recent I) versus P(I →recent O). That is, recent within a chunk versus ending a chunk. Bigrams like the acquisition allow the model to learn rules P(B →the I) and P(I →acquisition O). So, the PRLG is better able to correctly pick out the trigram chunk (the recent acquisition). 5 Constituent parsing with a cascade of chunkers We use cascades of chunkers for full constituent parsing, building hierarchical constituents bottomup. After chunking is performed, all multiword constituents are collapsed and represented by a single pseudoword. We use an extremely simple, but effective, way to create pseudoword for a chunk: pick the term in the chunk with the highest corpus frequency, and mark it as a pseudoword. The sentence is now a string of symbols (normal words and pseudowords), to which a subsequent unsupervised chunking model is applied. This process is illustrated in Fig. 4. Each chunker in the cascade chunks the raw text, then regenerates the dataset replacing chunks with pseudowords; this process is iterated until no new chunks are found. The separate chunkers in the casText : Mr. Vinken is chairman of Elsevier N.V. Level 1 : Mr. Vinken is chairman of Elsevier N.V. 1 Level 2 : Mr. Vinken is chairman of Elsevier N.V. 1 Level 3 : Mr. Vinken is chairman of Elsevier N.V. 1 Fig. 5: PRLG cascaded chunker output. NPs PPs Lev 1 Lev 2 Lev 1 Lev 2 WSJ HMM 66.5 68.1 20.6 70.2 PRLG 77.5 78.3 9.1 77.6 Negra HMM 54.7 62.3 24.8 48.1 PRLG 61.6 65.2 40.3 44.0 CTB HMM 33.3 35.4 34.6 38.4 PRLG 30.9 33.6 31.6 47.1 Table 7: NP and PP recall at cascade levels 1 and 2. 
The level 1 NP numbers differ from the NP chunking numbers from Table 3 since they include root-level constituents which are often NPs. cade are referred to as levels. In our experiments the cascade process took a minimum of 5 levels, and a maximum of 7. All chunkers in the cascade have the same settings in terms of smoothing, the tagset and initialization. Evaluation. Table 6 gives the unlabeled PARSEVAL scores for CCL and the two finite-state models. PRLG achieves the highest F-score for all datasets, and does so by a wide margin for German and Chinese. CCL does achieve higher recall for English. While the first level of constituent analysis has high precision and recall on NPs, the second level often does well finding prepositional phrases (PPs), especially in WSJ; see Table 7. This is illustrated in Fig. 5. This example also illustrates a PP attachment error, which are a common problem for these models. We also evaluate using short – 10-word or less – sentences. That said, we maintain the training/test split from before. Also, making use of the open 1082 Parsing English / WSJ German / Negra Chinese / CTB Model Prec Rec F Prec Rec F Prec Rec F CCL 53.6 50.0 51.7 33.4 32.6 33.0 37.0 21.6 27.3 HMM 48.2 43.6 45.8 30.8 50.3 38.2 43.0 29.8 35.2 PRLG 60.0 49.4 54.2 38.8 47.4 42.7 50.4 32.8 39.8 Table 6: Unlabeled PARSEVAL scores for cascaded models. source implementation by F. Luque,4 we compare on WSJ and Negra to the constituent context model (CCM) of Klein and Manning (2002). CCM learns to predict a set of brackets over a string (in practice, a string of POS tags) by jointly estimating constituent and distituent strings and contexts using an iterative EM-like procedure (though, as noted by Smith and Eisner (2004), CCM is deficient as a generative model). Note that this comparison is methodologically problematic in two respects. On the one hand, CCM is evaluated using gold standard POS sequences as input, so it receives a major source of supervision not available to the other models. On the other hand, the other models use punctuation as an indicator of constituent boundaries, but all punctuation is dropped from the input to CCM. Also, note that CCM performs better when trained on short sentences, so here CCM is trained only on the 10-wordor-less subsets of the training datasets.5 The results from the cascaded PRLG chunker are near or better than the best performance by CCL or CCM in these experiments. These and the full-length parsing results suggest that the cascaded chunker strategy generalizes better to longer sentences than does CCL. CCM does very poorly on longer sentences, but does not have the benefit of using punctuation, as do the raw text models; unfortunately, further exploration of this trade-off is beyond the scope of this paper. Finally, note that CCM has higher recall, and lower precision, generally, than the raw text models. This is due, in part, to the chart structure used by CCM in the calculation of constituent and distituent probabilities: as in CKY parsing, the chart structure entails the trees predicted will be binary-branching. CCL and the cascaded models can predict higher-branching constituent structures, 4http://www.cs.famaf.unc.edu.ar/ ˜francolq/en/proyectos/dmvccm/ 5This setup is the same as Seginer’s (2007), except the train/test split. 
Prec Rec F WSJ CCM 62.4 81.4 70.7 CCL 71.2 73.1 72.1 HMM 64.4 64.7 64.6 PRLG 74.6 66.7 70.5 Negra CCM 52.4 83.4 64.4 CCL 52.9 54.0 53.0 HMM 47.7 72.0 57.4 PRLG 56.3 72.1 63.2 CTB CCL 54.4 44.3 48.8 HMM 55.8 53.1 54.4 PRLG 62.7 56.9 59.6 Table 8: Evaluation on 10-word-or-less sentences. CCM scores are italicized as a reminder that CCM uses goldstandard POS sequences as input, so its results are not strictly comparable to the others. so fewer constituents are predicted overall. 6 Phrasal punctuation revisited Up to this point, the proposed models for chunking and parsing use phrasal punctuation as a phrasal separator, like CCL. We now consider how well these models perform in absence of this constraint. Table 9a provides comparison of the sequence models’ performance on the constituent chunking task without using phrasal punctuation in training and evaluation. The table shows absolute improvement (+) or decline (−) in precision and recall when phrasal punctuation is removed from the data. The punctuation constraint seems to help the chunkers some, but not very much; ignoring punctuation seems to improve chunker results for the HMM on Chinese. Overall, the effect of phrasal punctuation on the chunker models’ performance is not clear. The results for cascaded parsing differ strongly from those for chunking, as Table 9b indicates. Using phrasal punctuation to constrain bracket prediction has a larger impact on cascaded parsing re1083 0 20 40 60 2 2.5 3 3.5 EM Iterations Length a) Average Predicted Constituent Length Actual average chunk length 1 0 20 40 60 20 30 40 50 EM Iterations Precision W/ Punctuation No Punctuation b) Chunking Precision 1 0 20 40 60 20 30 40 50 EM Iterations Precision c) Precision at All Brackets 1 Fig. 6: Behavior of the PRLG model on CTB over the course of EM. WSJ Negra CTB Prec Rec Prec Rec Prec Rec HMM −5.8 −9.8 −0.1 −0.4 +0.7 +4.9 PRLG −2.5 −2.1 −2.1 −2.1 −7.0 +1.2 a) Constituent Chunking WSJ Negra CTB Prec Rec Prec Rec Prec Rec CCL −14.1 −13.5 −10.7 −4.6 −11.6 −6.0 HMM −7.8 −8.6 −2.8 +1.7 −13.4 −1.2 PRLG −10.1 −7.2 −4.0 −4.5 −22.0 −11.8 b) (Cascade) Parsing Table 9: Effects of dropping phrasal punctuation in unsupervised chunking and parsing evaluations relative to Tables 3 and 6. sults almost across the board. This is not surprising: while performing unsupervised partial parsing from raw text, the sequence models learn two general patterns: i) they learn to chunk rare sequences, such as named entities, and ii) they learn to chunk high-frequency function words next to lower frequency content words, which often correlate with NPs headed by determiners, PPs headed by prepositions and VPs headed by auxiliaries. When these patterns are themselves replaced with pseudowords (see Fig. 4), the models have fewer natural cues to identify constituents. However, within the degrees of freedom allowed by punctuation constraints as described, the chunking models continue to find relatively good constituents. While CCL makes use of phrasal punctuation in previously reported results, the open source implementation allows it to be evaluated without this constraint. We did so, and report results in Table 9b. CCL is, in fact, very sensitive to phrasal punctuation. Comparing CCL to the cascaded chunkers when none of them use punctuation constraints, the cascaded chunkers (both HMMs and PRLGs) outperform CCL for each evaluation and dataset. For the CTB dataset, best chunking performance and cascaded parsing performance flips from the HMM to the PRLG. 
More to the point, the PRLG is actually with worst performing model at the constituent chunking task, but the best performing cascade parser; also, this model has the most serious degrade in performance when phrasal punctuation is dropped from input. To investigate, we track the performance of the chunkers on the development dataset over iterations of EM. This is illustrated in Fig. 6 with the PRLG model. First of all, Fig. 6a reveals the average length of the constituents predicted by the PRLG model increases over the course of EM. However, the average constituent chunk length is 2.22. So, the PRLG chunker is predicting constituents that are longer than the ones targeted in the constituent chunking task: regardless of whether they are legitimate constituents or not, often they will likely be counted as false positives in this evaluation. This is confirmed by observing the constituent chunking precision in Fig. 6b, which peaks when the average predicted constituent length is about the same the actual average length of those in the evaluation. The question, then, is whether the longer chunks predicted correspond to actual constituents or not. Fig. 6c shows that the PRLG, when constrained by phrasal punctuation, does continue to improve its constituent prediction accuracy over the course of EM. These correctly predicted constituents are not counted as such in the constituent chunking or base NP evaluations, but they factor directly into 1084 improved accuracy when this model is part of a cascade. 7 Related work Our task is the unsupervised analogue of chunking (Abney, 1991), popularized by the 1999 and 2000 Conference on Natural Language Learning shared tasks (Tjong et al., 2000). In fact, our models follow Ramshaw and Marcus (1995), treating structure prediction as sequence prediction using BIO tagging. In addition to Seginer’s CCL model, the unsupervised parsing model of Gao and Suzuki (2003) and Gao et al. (2004) also operates on raw text. Like us, their model gives special treatment to local constituents, using a language model to characterize phrases which are linked via a dependency model. Their output is not evaluated directly using treebanks, but rather applied to several information retrieval problems. In the supervised realm, Hollingshead et al. (2005) compare context-free parsers with finite-state partial parsing methods. They find that full parsing maintains a number of benefits, in spite of the greater training time required: they can train on less data more effectively than chunkers, and are more robust to shifts in textual domain. Brants (1999) reports a supervised cascaded chunking strategy for parsing which is strikingly similar to the methods proposed here. In both, Markov models are used in a cascade to predict hierarchical constituent structure; and in both, the parameters for the model at each level are estimated independently. There are major differences, though: the models here are learned from raw text without tree annotations, using EM to train parameters; Brants’ cascaded Markov models use supervised maximum likelihood estimation. Secondly, between the separate levels of the cascade, we collapse constituents into symbols which are treated as tokens in subsequent chunking levels; the Markov models in the higher cascade levels in Brants’ work actually emit constituent structure. A related approach is that of Schuler et al. (2010), who report a supervised hierarchical hidden Markov model which uses a right-corner transform. 
This allows the model to predict more complicated trees with fewer levels than in Brants’ work or this paper. 8 Conclusion In this paper we have introduced a new subproblem of unsupervised parsing: unsupervised partial parsing, or unsupervised chunking. We have proposed a model for unsupervised chunking from raw text that is based on standard probabilistic finitestate methods. This model produces better local constituent predictions than the current best unsupervised parser, CCL, across datasets in English, German, and Chinese. By extending these probabilistic finite-state methods in a cascade, we obtain a general unsupervised parsing model. This model outperforms CCL in PARSEVAL evaluation on English, German, and Chinese. Like CCL, our models operate from raw (albeit segmented) text, and like it our models decode very quickly; however, unlike CCL, our models are based on standard and well-understood computational linguistics technologies (hidden Markov models and related formalisms), and may benefit from new research into these core technologies. For instance, our models may be improved by the application of (unsupervised) discriminative learning techniques with features (Berg-Kirkpatrick et al., 2010); or by incorporating topic models and document information (Griffiths et al., 2005; Moon et al., 2010). UPPARSE, the software used for the experiments in this paper, is available under an open-source license to facilitate replication and extensions.6 Acknowledgments. This material is based upon work supported in part by the U. S. Army Research Laboratory and the U. S. Army Research Office under grant number W911NF-10-1-0533. Support for the first author was also provided by Mike Hogg Endowment Fellowship, the Office of Graduate Studies at The University of Texas at Austin. This paper benefited from discussion in the Natural Language Learning reading group at UT Austin, especially from Collin Bannard, David Beaver, Matthew Lease, Taesun Moon and Ray Mooney. We also thank the three anonymous reviewers for insightful questions and helpful comments. 6 http://elias.ponvert.net/upparse. 1085 References S. Abney. 1991. Parsing by chunks. In R. Berwick, S. Abney, and C. Tenny, editors, Principle-based Parsing. Kluwer. P. W. Adriaans, M. Trautwein, and M. Vervoort. 2000. Towards high speed grammar induction on large text corpora. In SOFSEM. T. Berg-Kirkpatrick, A. Bouchard-Cˆot´e, J. DeNero, and D. Klein. 2010. Painless unsupervised learning with features. In HLT-NAACL. R. Bod. 2006. Unsupervised parsing with U-DOP. In CoNLL. T. Brants. 1999. Cascaded markov models. In EACL. E. Charniak. 1993. Statistical Language Learning. MIT. S. B. Cohen and N. A. Smith. 2009. Shared logistic normal distributions for soft parameter tying in unsupervised grammar induction. In HLT-NAACL. B. Cramer. 2007. Limitations of current grammar induction algorithms. In ACL-SRW. X. Duan, J. Zhao, and B. Xu. 2007. Probabilistic models for action-based Chinese dependency parsing. In ECML/PKDD. J. Gao and H. Suzuki. 2003. Unsupervised learning of dependency structure for language modeling. In ACL. J. Gao, J.Y. Nie, G. Wu, and G. Cao. 2004. Dependence language model for information retrieval. In SIGIR. T. L. Griffiths, M. Steyvers, D. M. Blei, and J. M. Tenenbaum. 2005. Integrating topics and syntax. In NIPS. C. H¨anig. 2010. Improvements in unsupervised cooccurence based parsing. In CoNLL. W. P. Headden III, M. Johnson, and D. McClosky. 2009. Improving unsupervised dependency parsing with richer contexts and smoothing. 
In HLT-NAACL. K. Hollingshead, S. Fisher, and B. Roark. 2005. Comparing and combining finite-state and context-free parsers. In HLT-EMNLP. D. Klein and C. D. Manning. 2002. A generative constituent-context model for improved grammar induction. In ACL. D. Klein and C. D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In ACL. B. Krenn, T. Brants, W. Skut, and Hans Uszkoreit. 1998. A linguistically interpreted corpus of German newspaper text. In Proceedings of the ESSLLI Workshop on Recent Advances in Corpus Annotation. K. Lari and S. J. Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech & Language, 4:35 – 56. M.P. Marcus, B. Santorini, and M.A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Compuational Linguistics, pages 313–330. M.P. Marcus, B. Santorini, M.A. Marcinkiewicz, and A. Taylor, 1999. Treebank-3. LDC. T. Moon, J. Baldridge, and K. Erk. 2010. Crouching Dirichlet, hidden Markov model: Unsupervised POS tagging with context local tag generation. In EMNLP. M. Palmer, F. D. Chiou, N. Xue, and T. K. Lee, 2005. Chinese Treebank 5.0. LDC. F. Pereira and Y. Schabes. 1992. Inside-outside reestimation from paritally bracketed corpora. In ACL. E. Ponvert, J. Baldridge, and K. Erk. 2010. Simple unsupervised prediction of low-level constituents. In ICSC. L.R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE. L. A. Ramshaw and M. P. Marcus. 1995. Text chunking using transformation-based learning. In Proc. of Third Workshop on Very Large Corpora. R. Reichart and A. Rappoport. 2010. Improved fully unsupervised parsing with Zoomed Learning. In EMNLP. W. Schuler, S. AbdelRahman, T. Miller, and L. Schwartz. 2010. Broad-coverage parsing using human-like memory constraints. Compuational Linguistics, 3(1). Y. Seginer. 2007. Fast unsupervised incremental parsing. In ACL. N. A. Smith and J. Eisner. 2004. Annealing techniques for unsupervised statistical language learning. In ACL. N. A. Smith and M. Johnson. 2007. Weighted and probabilistic CFGs. Computational Lingusitics. Z. Solan, D. Horn, E. Ruppin, and S. Edelman. 2005. Unsupervised learning of natural languages. PNAS, 102. V. I. Spitkovsky, H. Alshawi, and D. Jurafsky. 2010. From baby steps to leapfrog: How “less is more” in unsupervised dependency parsing. In NAACL-HLT. E. F. Tjong, K. Sang, and S. Buchholz. 2000. Introduction to the CoNLL-2000 Shared Task: Chunking. In CoNLL-LLL. M. van Zaanen. 2000. ABL: Alignment-based learning. In COLING. 1086
2011
108
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1087–1097, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Extracting Paraphrases from Definition Sentences on the Web Chikara Hashimoto∗ Kentaro Torisawa† Stijn De Saeger‡ Jun’ichi Kazama§ Sadao Kurohashi¶ ∗† ‡ § National Institute of Information and Communications Technology Kyoto, 619-0237, JAPAN ∗¶Graduate School of Informatics, Kyoto University Kyoto, 606-8501, JAPAN {∗ch,† torisawa, ‡ stijn,§ kazama}@nict.go.jp ¶ [email protected] Abstract We propose an automatic method of extracting paraphrases from definition sentences, which are also automatically acquired from the Web. We observe that a huge number of concepts are defined in Web documents, and that the sentences that define the same concept tend to convey mostly the same information using different expressions and thus contain many paraphrases. We show that a large number of paraphrases can be automatically extracted with high precision by regarding the sentences that define the same concept as parallel corpora. Experimental results indicated that with our method it was possible to extract about 300,000 paraphrases from 6 × 108 Web documents with a precision rate of about 94%. 1 Introduction Natural language allows us to express the same information in many ways, which makes natural language processing (NLP) a challenging area. Accordingly, many researchers have recognized that automatic paraphrasing is an indispensable component of intelligent NLP systems (Iordanskaja et al., 1991; McKeown et al., 2002; Lin and Pantel, 2001; Ravichandran and Hovy, 2002; Kauchak and Barzilay, 2006; Callison-Burch et al., 2006) and have tried to acquire a large amount of paraphrase knowledge, which is a key to achieving robust automatic paraphrasing, from corpora (Lin and Pantel, 2001; Barzilay and McKeown, 2001; Shinyama et al., 2002; Barzilay and Lee, 2003). We propose a method to extract phrasal paraphrases from pairs of sentences that define the same concept. The method is based on our observation that two sentences defining the same concept can be regarded as a parallel corpus since they largely convey the same information using different expressions. Such definition sentences abound on the Web. This suggests that we may be able to extract a large amount of phrasal paraphrase knowledge from the definition sentences on the Web. For instance, the following two sentences, both of which define the same concept “osteoporosis”, include two pairs of phrasal paraphrases, which are indicated by underlines 1⃝and 2⃝, respectively. (1) a. Osteoporosis is a disease that 1⃝decreases the quantity of bone and 2⃝makes bones fragile. b. Osteoporosis is a disease that 1⃝reduces bone mass and 2⃝increases the risk of bone fracture. We define paraphrase as a pair of expressions between which entailment relations of both directions hold. (Androutsopoulos and Malakasiotis, 2010). Our objective is to extract phrasal paraphrases from pairs of sentences that define the same concept. We propose a supervised method that exploits various kinds of lexical similarity features and contextual features. Sentences defining certain concepts are acquired automatically on a large scale from the Web by applying a quite simple supervised method. 
Previous methods most relevant to our work used parallel corpora such as multiple translations of the same source text (Barzilay and McKeown, 2001) or automatically acquired parallel news texts (Shinyama et al., 2002; Barzilay and Lee, 2003; Dolan et al., 2004). The former requires a large amount of manual labor to translate the same texts 1087 in several ways. The latter would suffer from the fact that it is not easy to automatically retrieve large bodies of parallel news text with high accuracy. On the contrary, recognizing definition sentences for the same concept is quite an easy task at least for Japanese, as we will show, and we were able to find a huge amount of definition sentence pairs from normal Web texts. In our experiments, about 30 million definition sentence pairs were extracted from 6×108 Web documents, and the estimated number of paraphrases recognized in the definition sentences using our method was about 300,000, for a precision rate of about 94%. Also, our experimental results show that our method is superior to well-known competing methods (Barzilay and McKeown, 2001; Koehn et al., 2007) for extracting paraphrases from definition sentence pairs. Our evaluation is based on bidirectional checking of entailment relations between paraphrases that considers the context dependence of a paraphrase. Note that using definition sentences is only the beginning of our research on paraphrase extraction. We have a more general hypothesis that sentences fulfilling the same pragmatic function (e.g. definition) for the same topic (e.g. osteoporosis) convey mostly the same information using different expressions. Such functions other than definition may include the usage of the same Linux command, the recipe for the same cuisine, or the description of related work on the same research issue. Section 2 describes related works. Section 3 presents our proposed method. Section 4 reports on evaluation results. Section 5 concludes the paper. 2 Related Work The existing work for paraphrase extraction is categorized into two groups. The first involves a distributional similarity approach pioneered by Lin and Pantel (2001). Basically, this approach assumes that two expressions that have a large distributional similarity are paraphrases. There are also variants of this approach that address entailment acquisition (Geffet and Dagan, 2005; Bhagat et al., 2007; Szpektor and Dagan, 2008; Hashimoto et al., 2009). These methods can be applied to a normal monolingual corpus, and it has been shown that a large number of paraphrases or entailment rules could be extracted. However, the precision of these methods has been relatively low. This is due to the fact that the evidence, i.e., distributional similarity, is just indirect evidence of paraphrase/entailment. Accordingly, these methods occasionally mistake antonymous pairs for paraphrases/entailment pairs, since an expression and its antonymous counterpart are also likely to have a large distributional similarity. Another limitation of these methods is that they can find only paraphrases consisting of frequently observed expressions since they must have reliable distributional similarity values for expressions that constitute paraphrases. The second category is a parallel corpus approach (Barzilay and McKeown, 2001; Shinyama et al., 2002; Barzilay and Lee, 2003; Dolan et al., 2004). Our method belongs to this category. This approach aligns expressions between two sentences in parallel corpora, based on, for example, the overlap of words/contexts. 
The aligned expressions are assumed to be paraphrases. In this approach, the expressions do not need to appear frequently in the corpora. Furthermore, the approach rarely mistakes antonymous pairs for paraphrases/entailment pairs. However, its limitation is the difficulty in preparing a large amount of parallel corpora, as noted before. We avoid this by using definition sentences, which can be easily acquired on a large scale from the Web, as parallel corpora. Murata et al. (2004) used definition sentences in two manually compiled dictionaries, which are considerably fewer in the number of definition sentences than those on the Web. Thus, the coverage of their method should be quite limited. Furthermore, the precision of their method is much poorer than ours as we report in Section 4. For a more extensive survey on paraphrasing methods, see Androutsopoulos and Malakasiotis (2010) and Madnani and Dorr (2010). 3 Proposed method Our method, targeting the Japanese language, consists of two steps: definition sentence acquisition and paraphrase extraction. We describe them below. 3.1 Definition sentence acquisition We acquire sentences that define a concept (definition sentences) as in Example (2), which defines “骨 1088 粗鬆症” (osteoporosis), from the 6×108 Web pages (Akamine et al., 2010) and the Japanese Wikipedia. (2) 骨粗鬆症とは、骨がもろくなってしまう病気だ。 (Osteoporosis is a disease that makes bones fragile.) Fujii and Ishikawa (2002) developed an unsupervised method to find definition sentences from the Web using 18 sentential templates and a language model constructed from an encyclopedia. On the other hand, we developed a supervised method to achieve a higher precision. We use one sentential template and an SVM classifier. Specifically, we first collect definition sentence candidates by a template “ˆNP とは.*”, where ˆ is the beginning of sentence and NP is the noun phrase expressing the concept to be defined followed by a particle sequence, “と” (comitative) and “は” (topic) (and optionally followed by comma), as exemplified in (2). As a result, we collected 3,027,101 sentences. Although the particle sequence tends to mark the topic of the definition sentence, it can also appear in interrogative sentences and normal assertive sentences in which a topic is strongly emphasized. To remove such non-definition sentences, we classify the candidate sentences using an SVM classifier with a polynominal kernel (d = 2).1 Since Japanese is a head-final language and we can judge whether a sentence is interrogative or not from the last words in the sentence, we included morpheme N-grams and bag-of-words (with the window size of N) at the end of sentences in the feature set. The features are also useful for confirming that the head verb is in the present tense, which definition sentences should be. Also, we added the morpheme N-grams and bag-of-words right after the particle sequence in the feature set since we observe that non-definition sentences tend to have interrogative related words like “何” (what) or “一体” ((what) on earth) right after the particle sequence. We chose 5 as N from our preliminary experiments. Our training data was constructed from 2,911 sentences randomly sampled from all of the collected sentences. 61.1% of them were labeled as positive. In the 10-fold cross validation, the classifier’s accuracy, precision, recall, and F1 were 89.4, 90.7, 1We use SVMlight available at http://svmlight. joachims.org/. 92.2, and 91.4, respectively. 
Using the classifier, we acquired 1,925,052 positive sentences from all of the collected sentences. After adding definition sentences from Wikipedia articles, which are typically the first sentence of the body of each article (Kazama and Torisawa, 2007), we obtained a total of 2,141,878 definition sentence candidates, which covered 867,321 concepts ranging from weapons to rules of baseball. Then, we coupled two definition sentences whose defined concepts were the same and obtained 29,661,812 definition sentence pairs. Obviously, our method is tailored to Japanese. For a language-independent method of definition acquisition, see Navigli and Velardi (2010) as an example. 3.2 Paraphrase extraction Paraphrase extraction proceeds as follows. First, each sentence in a pair is parsed by the dependency parser KNP2 and dependency tree fragments that constitute linguistically well-formed constituents are extracted. The extracted dependency tree fragments are called candidate phrases hereafter. We restricted candidate phrases to predicate phrases that consist of at least one dependency relation, do not contain demonstratives, and in which all the leaf nodes are nominal and all of the constituents are consecutive in the sentence. KNP indicates whether each candidate phrase is a predicate based on the POS of the head morpheme. Then, we check all the pairs of candidate phrases between two definition sentences to find paraphrase pairs.3 In (1), repeated in (3), candidate phrase pairs to be checked include ( 1⃝decreases the quantity of bone, 1⃝reduces bone mass), ( 1⃝decreases the quantity of bone, 2⃝increases the risk of bone fracture), ( 2⃝ makes bones fragile, 1⃝reduces bone mass), and ( 2⃝ makes bones fragile, 2⃝increases the risk of bone fracture). (3) a. Osteoporosis is a disease that 1⃝decreases the quantity of bone and 2⃝makes bones fragile. b. Osteoporosis is a disease that 1⃝reduces bone mass and 2⃝increases the risk of bone fracture. 2http://nlp.kuee.kyoto-u.ac.jp/ nl-resource/knp.html. 3Our method discards candidate phrase pairs in which one subsumes the other in terms of their character string, or the difference is only one proper noun like “toner cartridges that Apple Inc. made” and “toner cartridges that Xerox made.” Proper nouns are recognized by KNP. 1089 f1 The ratio of the number of morphemes shared between two candidate phrases to the number of all of the morphemes in the two phrases. f2 The ratio of the number of a candidate phrase’s morphemes, for which there is a morpheme with small edit distance (1 in our experiment) in another candidate phrase, to the number of all of the morphemes in the two phrases. Note that Japanese has many orthographical variations and edit distance is useful for identifying them. f3 The ratio of the number of a candidate phrase’s morphemes, for which there is a morpheme with the same pronunciation in another candidate phrase, to the number of all of the morphemes in the two phrases. Pronunciation is also useful for identifying orthographic variations. Pronunciation is given by KNP. f4 The ratio of the number of morphemes of a shorter candidate phrase to that of a longer one. f5 The identity of the inflected form of the head morpheme between two candidate phrases: 1 if they are identical, 0 otherwise. f6 The identity of the POS of the head morpheme between two candidate phrases: 1 or 0. f7 The identity of the inflection (conjugation) of the head morpheme between two candidate phrases: 1 or 0. 
f8 The ratio of the number of morphemes that appear in a candidate phrase segment of a definition sentence s1 and in a segment that is NOT a part of the candidate phrase of another definition sentence s2 to the number of all of the morphemes of s1’s candidate phrase, i.e. how many extra morphemes are incorporated into s1’s candidate phrase. f9 The reversed (s1 ↔s2) version of f8. f10 The ratio of the number of parent dependency tree fragments that are shared by two candidate phrases to the number of all of the parent dependency tree fragments of the two phrases. Dependency tree fragments are represented by the pronunciation of their component morphemes. f11 A variation of f10; tree fragments are represented by the base form of their component morphemes. f12 A variation of f10; tree fragments are represented by the POS of their component morphemes. f13 The ratio of the number of unigrams (morphemes) that appear in the child context of both candidate phrases to the number of all of the child context morphemes of both candidate phrases. Unigrams are represented by the pronunciation of the morpheme. f14 A variation of f13; unigrams are represented by the base form of the morpheme. f15 A variation of f14; the numerator is the number of child context unigrams that are adjacent to both candidate phrases. f16 The ratio of the number of trigrams that appear in the child context of both candidate phrases to the number of all of the child context morphemes of both candidate phrases. Trigrams are represented by the pronunciation of the morpheme. f17 Cosine similarity between two definition sentences from which a candidate phrase pair is extracted. Table 1: Features used by paraphrase classifier. The paraphrase checking of candidate phrase pairs is performed by an SVM classifier with a linear kernel that classifies each pair of candidate phrases into a paraphrase or a non-paraphrase.4 Candidate phrase pairs are ranked by their distance from the SVM’s hyperplane. Features for the classifier are based on our observation that two candidate phrases tend to be paraphrases if the candidate phrases themselves are sufficiently similar and/or their surrounding contexts are sufficiently similar. Table 1 lists the features used by the classifier.5 Basically, they represent either the similarity of candidate phrases (f19) or that of their contexts (f10-17). We think that they have various degrees of discriminative power, and thus we use the SVM to adjust their weights. Figure 1 illustrates features f8-12, for which you may need supplemental remarks. English is used for ease of explanation. In the figure, f8 has a positive value since the candidate phrase of s1 contains morphemes “of bone”, which do not appear in the can4We use SVMperf available at http://svmlight. joachims.org/svm perf.html. 5In the table, the parent context of a candidate phrase consists of expressions that appear in ancestor nodes of the candidate phrase in terms of the dependency structure of the sentence. Child contexts are defined similarly. Figure 1: Illustration of features f8-12. didate phrase of s2 but do appear in the other part of s2, i.e. they are extra morphemes for s1’s candidate phrase. On the other hand, f9 is zero since there is no such extra morpheme in s2’s candidate phrase. Also, features f10-12 have positive values since the two candidate phrases share two parent dependency tree fragments, (that increases) and (of fracture). 
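As an illustration, a few of the Table 1 features can be computed as below, under one reasonable reading of the ratios. This is a simplified sketch over whitespace-tokenized "morphemes", with English tokens used only for readability; the actual system operates on KNP morphemes with pronunciations and dependency structure.

```python
def f1_morpheme_overlap(p1, p2):
    """f1: morphemes shared by the two candidate phrases, relative to
    all morphemes of both phrases (shared tokens counted in each phrase)."""
    shared = sum(p1.count(m) + p2.count(m) for m in set(p1) & set(p2))
    return shared / (len(p1) + len(p2))

def f4_length_ratio(p1, p2):
    """f4: length of the shorter phrase over the length of the longer one."""
    return min(len(p1), len(p2)) / max(len(p1), len(p2))

def f5_head_identity(p1, p2):
    """f5: 1 if the head morphemes are identical, else 0. Japanese is
    head-final, so the head is taken here to be the last morpheme;
    KNP supplies this information directly in the real system."""
    return 1.0 if p1[-1] == p2[-1] else 0.0

def f13_context_overlap(ctx1, ctx2):
    """f13-style feature: child-context unigrams shared by both phrases,
    relative to all child-context unigrams of both phrases."""
    shared = 2 * len(set(ctx1) & set(ctx2))
    total = len(ctx1) + len(ctx2)
    return shared / total if total else 0.0

if __name__ == "__main__":
    p1 = "reduces bone mass".split()
    p2 = "decreases the quantity of bone".split()
    print(f1_morpheme_overlap(p1, p2))   # 2 / 8 = 0.25
    print(f4_length_ratio(p1, p2))       # 3 / 5 = 0.6
    print(f5_head_identity(p1, p2))      # 0.0 ("mass" vs. "bone")
```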
We have also tried the following features, which we do not detail due to space limitation: the similarity of candidate phrases based on semantically similar nouns (Kazama and Torisawa, 2008), entailing/entailed verbs (Hashimoto et al., 2009), and the identity of the pronunciation and base form of the head morpheme; N-grams (N=1,2,3) of child and parent contexts represented by either the inflected form, base form, pronunciation, or POS of mor1090 Original definition sentence pair (s1, s2) Paraphrased definition sentence pair (s′ 1, s′ 2) s1: Osteoporosis is a disease that reduces bone mass and makes bones fragile. s′ 1: Osteoporosis is a disease that decreases the quantity of bone and makes bones fragile. s2: Osteoporosis is a disease that decreases the quantity of bone and increases the risk of bone fracture. s′ 2: Osteoporosis is a disease that reduces bone mass and increases the risk of bone fracture. Figure 2: Bidirectional checking of entailment relation (→) of p1 →p2 and p2 →p1. p1 is “reduces bone mass” in s1 and p2 is “decreases the quantity of bone” in s2. p1 and p2 are exchanged between s1 and s2 to generate corresponding paraphrased sentences s′ 1 and s′ 2. p1 →p2 (p2 →p1) is verified if s1 →s′ 1 (s2 →s′ 2) holds. In this case, both of them hold. English is used for ease of explanation. pheme; parent/child dependency tree fragments represented by either the inflected form, base form, pronunciation, or POS; adjacent versions (cf. f15) of N-gram features and parent/child dependency tree features. These amount to 78 features, but we eventually settled on the 17 features in Table 1 through ablation tests to evaluate the discriminative power of each feature. The ablation tests were conducted using training data that we prepared. In preparing the training data, we faced the problem that the completely random sampling of candidate paraphrase pairs provided us with only a small number of positive examples. Thus, we automatically collected candidate paraphrase pairs that were expected to have a high likelihood of being positive as examples to be labeled. The likelihood was calculated by simply summing all of the 78 feature values that we have tried, since they indicate the likelihood of a given candidate paraphrase pair’s being a paraphrase. Note that values of the features f8 and f9 are weighted with −1, since they indicate the unlikelihood. Specifically, we first randomly sampled 30,000 definition sentence pairs from the 29,661,812 pairs, and collected 3,000 candidate phrase pairs that had the highest likelihood from them. The manual labeling of each candidate phrase pair (p1, p2) was based on bidirectional checking of entailment relation, p1 →p2 and p2 →p1, with p1 and p2 embedded in contexts. This scheme is similar to the one proposed by Szpektor et al. (2007). We adopt this scheme since paraphrase judgment might be unstable between annotators unless they are given a particular context based on which they make a judgment. As described below, we use definition sentences as contexts. We admit that annotators might be biased by this in some unexpected way, but we believe that this is a more stable method than that without contexts. The labeling process is as follows. First, from each candidate phrase pair (p1, p2) and its source definition sentence pair (s1, s2), we create two paraphrase sentence pairs (s′ 1, s′ 2) by exchanging p1 and p2 between s1 and s2. 
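The selection of candidate phrase pairs for labeling, i.e. ranking by the sum of all 78 feature values with f8 and f9 weighted by -1, can be sketched as follows; `extract_features` is an assumed interface standing in for the full feature extractor, not code from the paper.

```python
import heapq

NEGATED = {"f8", "f9"}   # features indicating *un*likelihood of a paraphrase

def likelihood(features):
    """Sum of feature values, with f8 and f9 weighted by -1."""
    return sum(-v if name in NEGATED else v for name, v in features.items())

def select_for_labeling(candidate_pairs, extract_features, k=3000):
    """Return the k candidate phrase pairs with the highest likelihood.

    `candidate_pairs` is an iterable of (phrase1, phrase2, sent1, sent2);
    `extract_features` maps such a tuple to a {feature_name: value} dict."""
    scored = ((likelihood(extract_features(c)), i, c)
              for i, c in enumerate(candidate_pairs))
    return [c for _, _, c in heapq.nlargest(k, scored)]

if __name__ == "__main__":
    toy = [("reduces bone mass", "decreases bone quantity", "s1", "s2"),
           ("reduces bone mass", "increases fracture risk", "s1", "s2")]
    fake = lambda c: {"f1": 0.5 if "bone" in c[1] else 0.1, "f8": 0.2}
    print(select_for_labeling(toy, fake, k=1))
```

The pairs selected this way are the ones embedded into their source definition sentences for annotation.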
Then, annotators check if s1 entails s′ 1 and s2 entails s′ 2 so that entailment relations of both directions p1 →p2 and p2 →p1 are checked. Figure 2 shows an example of bidirectional checking. In this example, both entailment relations, s1 →s′ 1 and s2 →s′ 2, hold, and thus the candidate phrase pair (p1, p2) is judged as positive. We used (p1, p2), for which entailment relations of both directions held, as positive examples (1,092 pairs) and the others as negative ones (1,872 pairs).6 We built the paraphrase classifier from the training data. As mentioned, candidate phrase pairs were ranked by the distance from the SVM’s hyperplane. 4 Experiment In this paper, our claims are twofold. I. Definition sentences on the Web are a treasure trove of paraphrase knowledge (Section 4.2). II. Our method of paraphrase acquisition from definition sentences is more accurate than wellknown competing methods (Section 4.1). We first verify claim II by comparing our method with that of Barzilay and McKeown (2001) (BM method), Moses7 (Koehn et al., 2007) (SMT method), and that of Murata et al. (2004) (Mrt method). The first two methods are well known for accurately extracting semantically equivalent phrase pairs from parallel corpora.8 Then, we verify claim 6The remaining 36 pairs were discarded as they contained garbled characters of Japanese. 7http://www.statmt.org/moses/ 8As anonymous reviewers pointed out, they are unsupervised methods and thus unable to be adapted to definition sen1091 I by comparing definition sentence pairs with sentence pairs that are acquired from the Web using Yahoo!JAPAN API9 as a paraphrase knowledge source. In the latter data set, two sentences of each pair are expected to be semantically similar regardless of whether they are definition sentences. Both sets contain 100,000 pairs. Three annotators (not the authors) checked evaluation samples. Fleiss’ kappa (Fleiss, 1971) was 0.69 (substantial agreement (Landis and Koch, 1977)). 4.1 Our method vs. competing methods In this experiment, paraphrase pairs are extracted from 100,000 definition sentence pairs that are randomly sampled from the 29,661,812 pairs. Before reporting the experimental results, we briefly describe the BM, SMT, and Mrt methods. BM method Given parallel sentences like multiple translations of the same source text, the BM method works iteratively as follows. First, it collects from the parallel sentences identical word pairs and their contexts (POS N-grams with indices indicating corresponding words between paired contexts) as positive examples and those of different word pairs as negative ones. Then, each context is ranked based on the frequency with which it appears in positive (negative) examples. The most likely K positive (negative) contexts are used to extract positive (negative) paraphrases from the parallel sentences. Extracted positive (negative) paraphrases and their morpho-syntactic patterns are used to collect additional positive (negative) contexts. All the positive (negative) contexts are ranked, and additional paraphrases and their morpho-syntactic patterns are extracted again. This iterative process finishes if no further paraphrase is extracted or the number of iterations reaches a predefined threshold T. In this experiment, following Barzilay and McKeown (2001), K is 10 and N is 1 to 3. The value of T is not given in their paper. We chose 3 as its value based on our preliminary experiments. Note that paraphrases extracted by this method are not ranked. tences. 
Nevertheless, we believe that comparing these methods with ours is very informative, since they are known to be accurate and have been influential. 9http://developer.yahoo.co.jp/webapi/ SMT method Our SMT method uses Moses (Koehn et al., 2007) and extracts a phrase table, a set of two phrases that are translations of each other, given a set of two sentences that are translations of each other. If you give Moses monolingual parallel sentence pairs, it should extract a set of two phrases that are paraphrases of each other. In this experiment, default values were used for all parameters. To rank extracted phrase pairs, we assigned each of them the product of two phrase translation probabilities of both directions that were given by Moses. For other SMT methods, see Quirk et al. (2004) and Bannard and Callison-Burch (2005) among others. Mrt method Murata et al. (2004) proposed a method to extract paraphrases from two manually compiled dictionaries. It simply regards a difference between two definition sentences of the same word as a paraphrase candidate. Paraphrase candidates are ranked according to an unsupervised scoring scheme that implements their assumption. They assume that a paraphrase candidate tends to be a valid paraphrase if it is surrounded by infrequent strings and/or if it appears multiple times in the data. In this experiment, we evaluated the unsupervised version of our method in addition to the supervised one described in Section 3.2, in order to compare it fairly with the other methods. The unsupervised method works in the same way as the supervised one, except that it ranks candidate phrase pairs by the sum of all 17 feature values, instead of the distance from the SVM’s hyperplane. In other words, no supervised learning is used. All the feature values are weighted with 1, except for f8 and f9, which are weighted with −1 since they indicate the unlikelihood of a candidate phrase pair being paraphrases. BM, SMT, Mrt, and the two versions of our method were used to extract paraphrase pairs from the same 100,000 definition sentence pairs. Evaluation scheme Evaluation of each paraphrase pair (p1, p2) was based on bidirectional checking of entailment relations p1 →p2 and p2 → p1 in a way similar to the labeling of the training data. The difference is that contexts for evaluation are two sentences that are retrieved from the Web and contain p1 and p2, instead of definition sentences from which p1 and p2 are extracted. This 1092 is intended to check whether extracted paraphrases are also valid for contexts other than those from which they are extracted. The evaluation proceeds as follows. For the top m paraphrase pairs of each method (in the case of the BM method, randomly sampled m pairs were used, since the method does not rank paraphrase pairs), we retrieved a sentence pair (s1, s2) for each paraphrase pair (p1, p2) from the Web, such that s1 contains p1 and s2 contains p2. In doing so, we make sure that neither s1 nor s2 are the definition sentences from which p1 and p2 are extracted. For each method, we randomly sample n samples from all of the paraphrase pairs (p1, p2) for which both s1 and s2 are retrieved. Then, from each (p1, p2) and (s1, s2), we create two paraphrase sentence pairs (s′ 1, s′ 2) by exchanging p1 and p2 between s1 and s2. All samples, each consisting of (p1, p2), (s1, s2), and (s′ 1, s′ 2), are checked by three human annotators to determine whether s1 entails s′ 1 and s2 entails s′ 2 so that entailment relations of both directions are verified. 
In advance of evaluation annotation, all the evaluation samples are shuffled so that the annotators cannot find out which sample is given by which method for fairness. We regard each paraphrase pair as correct if at least two annotators judge that entailment relations of both directions hold for it. You may wonder whether only one pair of sentences (s1, s2) is enough for evaluation since a correct (wrong) paraphrase pair might be judged as wrong (correct) accidentally. Nevertheless, we suppose that the final evaluation results are reliable if the number of evaluation samples is sufficient. In this experiment, m is 5,000 and n is 200. We use Yahoo!JAPAN API to retrieve sentences. Graph (a) in Figure 3 shows a precision curve for each method. Sup and Uns respectively indicate the supervised and unsupervised versions of our method. The figure indicates that Sup outperforms all the others and shows a high precision rate of about 94% at the top 1,000. Remember that this is the result of using 100,000 definition sentence pairs. Thus, we estimate that Sup can extract about 300,000 paraphrase pairs with a precision rate of about 94%, if we use all 29,661,812 definition sentence pairs that we acquired. Furthermore, we measured precision after trivial paraphrase pairs were discarded from the evaluation samples of each method. A candidate phrase pair Definition sentence pairs Sup Uns BM SMT Mrt with trivial 1,381,424 24,049 9,562 18,184 without trivial 1,377,573 23,490 7,256 18,139 Web sentence pairs Sup Uns BM SMT Mrt with trivial 277,172 5,101 4,586 4,978 without trivial 274,720 4,399 2,342 4,958 Table 2: Number of extracted paraphrases. (p1, p2) is regarded as trivial if the pronunciation is the same between p1 and p2,10 or all of the content words contained in p1 are the same as those of p2. Graph (b) gives a precision curve for each method. Again, Sup outperforms the others too, and maintains a precision rate of about 90% until the top 1,000. These results support our claim II. The upper half of Table 2 shows the number of extracted paraphrases with/without trivial pairs for each method.11 Sup and Uns extracted many more paraphrases. It is noteworthy that Sup performed the best in terms of both precision rate and the number of extracted paraphrases. Table 3 shows examples of correct and incorrect outputs of Sup. As the examples indicate, many of the extracted paraphrases are not specific to definition sentences and seem very reusable. However, there are few paraphrases involving metaphors or idioms in the outputs due to the nature of definition sentences. In this regard, we do not claim that our method is almighty. We agree with Sekine (2005) who claims that several different methods are required to discover a wider variety of paraphrases. In graphs (a) and (b), the precision of the SMT method goes up as rank goes down. This strange behavior is due to the scoring by Moses that worked poorly for the data; it gave 1.0 to 82.5% of all the samples, 38.8% of which were incorrect. We suspect SMT methods are poor at monolingual alignment for paraphrasing or entailment tasks since, in the tasks, data is much noisier than that used for SMT. See MacCartney et al. (2008) for similar discussion. 4.2 Definition pairs vs. Web sentence pairs To collect Web sentence pairs, first, we randomly sampled 1.8 million sentences from the Web corpus. 10There are many kinds of orthographic variants in Japanese, which can be identified by their pronunciation. 
11We set no threshold for candidate phrase pairs of each method, and counted all the candidate phrase pairs in Table 2. 1093 0 0.2 0.4 0.6 0.8 1 0 1000 2000 3000 4000 5000 Precision Top-N ’Sup_def’ ’Uns_def’ ’SMT_def’ ’BM_def’ ’Mrt_def’ 0 0.2 0.4 0.6 0.8 1 0 1000 2000 3000 4000 5000 Precision Top-N ’Sup_def_n’ ’Uns_def_n’ ’SMT_def_n’ ’BM_def_n’ ’Mrt_def_n’ (a) Definition sentence pairs with trivial paraphrases (b) Definition sentence pairs without trivial paraphrases 0 0.2 0.4 0.6 0.8 1 0 1000 2000 3000 4000 5000 Precision Top-N ’Sup_www’ ’Uns_www’ ’SMT_www’ ’BM_www’ ’Mrt_www’ 0 0.2 0.4 0.6 0.8 1 0 1000 2000 3000 4000 5000 Precision Top-N ’Sup_www_n’ ’Uns_www_n’ ’SMT_www_n’ ’BM_www_n’ ’Mrt_www_n’ (c) Web sentence pairs with trivial paraphrases (d) Web sentence pairs without trivial paraphrases Figure 3: Precision curves of paraphrase extraction. Rank Paraphrase pair Correct 13 メールアドレスにメールを送る(send a message to the e-mail address) ⇔メールアドレスに電子メールを送る(send an e-mail message to the e-mail address) 19 お客様の依頼による(requested by a customer) ⇔お客様の委託による(commissioned by a customer) 70 企業の財政状況を表す(describe the fiscal condition of company) ⇔企業の財政状態を示す(indicate the fiscal state of company) 112 インフォメーションを得る(get information) ⇔ニュースを得る(get news) 656 きまりのことです(it is a convention) ⇔ルールのことです(it is a rule) 841 地震のエネルギー規模をあらわす(represent the energy scale of earthquake) ⇔地震の規模を表す(represent the scale of earthquake) 929 細胞を酸化させる(cause the oxidation of cells) ⇔細胞を老化させる(cause cellular aging) 1,553 角質を取り除く(remove dead skin cells) ⇔角質をはがす(peel off dead skin cells) 2,243 胎児の発育に必要だ(required for the development of fetus) ⇔胎児の発育成長に必要不可欠だ(indispensable for the growth and development of fetus) 2,855 視力を矯正する(correct eyesight) ⇔視力矯正を行う(perform eyesight correction) 2,931 チャラにしてもらう(call it even) ⇔帳消しにしてもらう(call it quits) 3,667 ハードディスク上に蓄積される(accumulated on a hard disk) ⇔ハードディスクドライブに保存される(stored on a hard disk drive) 4,870 有害物質を排泄する(excrete harmful substance) ⇔有害毒素を排出する(discharge harmful toxin) 5,501 1つのCPUの内部に2つのプロセッサコアを搭載する(mount two processor cores on one CPU) ⇔1つのパッケー ジに2つのプロセッサコアを集積する(build two processor cores into one package) 10,675 外貨を売買する(trade foreign currencies) ⇔通貨を交換する(exchange one currency for another) 112,819 派遣先企業の社員になる(become a regular staff member of the company where (s)he has worked as a temp) ⇔派遣 先に直接雇用される(employed by the company where (s)he has worked as a temp) 193,553 Webサイトにアクセスする(access Web sites) ⇔WWWサイトを訪れる(visit WWW sites) Incorrect 903 ブラウザに送信される(send to a Web browser) ⇔パソコンに送信される(send to a PC) 2,530 調和をはかる(intend to balance) ⇔リフレッシュを図る(intend to refresh) 3,008 消化酵素では消化できない(unable to digest with digestive enzymes) ⇔消化酵素で消化され難い(hard to digest with digestive enzymes) Table 3: Examples of correct and incorrect paraphrases extracted by our supervised method with their rank. 1094 We call them sampled sentences. Then, using Yahoo!JAPAN API, we retrieved up to 20 snippets relevant to each sampled sentence using all of the nouns in each sentence as a query. After that, each snippet was split into sentences, which we call snippet sentences. We paired a sampled sentence and a snippet sentence that was the most similar to the sampled sentence. Similarity is the number of nouns shared by the two sentences. Finally, we randomly sampled 100,000 pairs from all the pairs. Paraphrase pairs were extracted from the Web sentence pairs by using BM, SMT, Mrt and the supervised and unsupervised versions of our method. 
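The pairing step for the Web sentence baseline described above, i.e. matching each sampled sentence with the retrieved snippet sentence that shares the most nouns with it, can be sketched as follows. The noun extractor and the retrieval results are placeholders; the paper uses the Yahoo!JAPAN API and morphological analysis for these steps.

```python
def shared_noun_count(nouns1, nouns2):
    """Similarity used for pairing: number of distinct nouns in common."""
    return len(set(nouns1) & set(nouns2))

def pair_with_best_snippet(sampled_sentence, snippet_sentences, get_nouns):
    """Pair a sampled sentence with the most similar snippet sentence.

    `get_nouns(sentence)` is a placeholder for morphological analysis
    returning the nouns of a sentence."""
    src_nouns = get_nouns(sampled_sentence)
    best = max(snippet_sentences,
               key=lambda s: shared_noun_count(src_nouns, get_nouns(s)),
               default=None)
    return (sampled_sentence, best) if best is not None else None

if __name__ == "__main__":
    # Toy noun extractor for the English example: capitalized words only.
    get_nouns = lambda s: [w for w in s.split() if w[0].isupper()]
    snippets = ["Osteoporosis weakens Bone Mass badly",
                "Weather in Kyoto today"]
    print(pair_with_best_snippet("Osteoporosis reduces Bone Mass",
                                 snippets, get_nouns))
```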
The features used with our methods were selected from all of the 78 features mentioned in Section 3.2 so that they performed well for Web sentence pairs. Specifically, the features were selected by ablation tests using training data that was tailored to Web sentence pairs. The training data consisted of 2,741 sentence pairs that were collected in the same way as the Web sentence pairs and was labeled in the same way as described in Section 3.2. Graph (c) of Figure 3 shows precision curves. We also measured precision without trivial pairs in the same way as the previous experiment. Graph (d) shows the results. The lower half of Table 2 shows the number of extracted paraphrases with/without trivial pairs for each method. Note that precision figures of our methods in graphs (c) and (d) are lower than those of our methods in graphs (a) and (b). Additionally, none of the methods achieved a precision rate of 90% using Web sentence pairs.12 We think that a precision rate of at least 90% would be necessary if you apply automatically extracted paraphrases to NLP tasks without manual annotation. Only the combination of Sup and definition sentence pairs achieved that precision. Also note that, for all of the methods, the numbers of extracted paraphrases from Web sentence pairs are fewer than those from definition sentence pairs. From all of these results, we conclude that our claim I is verified. 12Precision of SMT is unexpectedly good. We found some Web sentence pairs consisting of two mostly identical sentences on rare occasions. The method worked relatively well for them. 5 Conclusion We proposed a method of extracting paraphrases from definition sentences on the Web. From the experimental results, we conclude that the following two claims of this paper are verified. 1. Definition sentences on the Web are a treasure trove of paraphrase knowledge. 2. Our method extracts many paraphrases from the definition sentences on the Web accurately; it can extract about 300,000 paraphrases from 6 × 108 Web documents with a precision rate of about 94%. Our future work is threefold. First, we will release extracted paraphrases from all of the 29,661,812 definition sentence pairs that we acquired, after human annotators check their validity. The result will be available through the ALAGIN forum.13 Second, we plan to induce paraphrase rules from paraphrase instances. Though our method can extract a variety of paraphrase instances on a large scale, their coverage might be insufficient for real NLP applications since some paraphrase phenomena are highly productive. Therefore, we need paraphrase rules in addition to paraphrase instances. Barzilay and McKeown (2001) induced simple POS-based paraphrase rules from paraphrase instances, which can be a good starting point. Finally, as mentioned in Section 1, the work in this paper is only the beginning of our research on paraphrase extraction. We are trying to extract far more paraphrases from a set of sentences fulfilling the same pragmatic function (e.g. definition) for the same topic (e.g. osteoporosis) on the Web. Such functions other than definition may include the usage of the same Linux command, the recipe for the same cuisine, or the description of related work on the same research issue. Acknowledgments We would like to thank Atsushi Fujita, Francis Bond, and all of the members of the Information Analysis Laboratory, Universal Communication Research Institute at NICT. 
13http://alagin.jp/ 1095 References Susumu Akamine, Daisuke Kawahara, Yoshikiyo Kato, Tetsuji Nakagawa, Yutaka I. Leon-Suematsu, Takuya Kawada, Kentaro Inui, Sadao Kurohashi, and Yutaka Kidawara. 2010. Organizing information on the web to support user judgments on information credibility. In Proceedings of 2010 4th International Universal Communication Symposium Proceedings (IUCS 2010), pages 122–129. Ion Androutsopoulos and Prodromos Malakasiotis. 2010. A survey of paraphrasing and textual entailment methods. Journal of Artificial Intelligence Research, 38:135–187. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL-2005), pages 597– 604. Regina Barzilay and Lillian Lee. 2003. Learning to paraphrase: An unsupervised approach using multiplesequence alignment. In Proceedings of HLT-NAACL 2003, pages 16–23. Regina Barzilay and Kathleen R. McKeown. 2001. Extracting paraphrases from a parallel corpus. In Proceedings of the 39th Annual Meeting of the ACL joint with the 10th Meeting of the European Chapter of the ACL (ACL/EACL 2001), pages 50–57. Rahul Bhagat, Patrick Pantel, and Eduard Hovy. 2007. Ledir: An unsupervised algorithm for learning directionality of inference rules. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP2007), pages 161–170. Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of the 2006 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2006), pages 17–24. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics (COLING 2004), pages 350– 356. Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378–382. Atsushi Fujii and Tetsuya Ishikawa. 2002. Extraction and organization of encyclopedic knowledge information using the World Wide Web (written in Japanese). Institute of Electronics, Information, and Communication Engineers, J85-D-II(2):300–307. Maayan Geffet and Ido Dagan. 2005. The distributional inclusion hypotheses and lexical entailment. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005), pages 107–114. Chikara Hashimoto, Kentaro Torisawa, Kow Kuroda, Stijn De Saeger, Masaki Murata, and Jun’ichi Kazama. 2009. Large-scale verb entailment acquisition from the web. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP 2009), pages 1172–1181. Lidija Iordanskaja, Richard Kittredge, and Alain Polgu`ere. 1991. Lexical selection and paraphrase in a meaning-text generation model. In C´ecile L. Paris, William R. Swartout, and William C. Mann, editors, Natural language generation in artificial intelligence and computational linguistics, pages 293–312. Kluwer Academic Press. David Kauchak and Regina Barzilay. 2006. Paraphrasing for automatic evaluation. In Proceedings of the 2006 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2006), pages 455–462. Jun’ichi Kazama and Kentaro Torisawa. 2007. 
Exploiting Wikipedia as external knowledge for named entity recognition. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007), pages 698–707, June. Jun’ichi Kazama and Kentaro Torisawa. 2008. Inducing gazetteers for named entity recognition by large-scale clustering of dependency relations. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL-08: HLT), pages 407–415. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007), pages 177–180. J. Richard Landis and Gary G. Koch. 1977. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174. Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question answering. Natural Language Engineering, 7(4):343–360. Bill MacCartney, Michel Galley, and Christopher D. Manning. 2008. A phrase-based alignment model for natural language inference. In Proceedings of the 2008 1096 Conference on Empirical Methods in Natural Language Processing (EMNLP-2008), pages 802–811. Nitin Madnani and Bonnie Dorr. 2010. Generating phrasal and sentential paraphrases: A survey of datadriven methods. Computational Linguistics, 36(3). Kathleen R. McKeown, Regina Barzilay, David Evans, Vasileios Hatzivassiloglou, Judith L. Klavans, Ani Nenkova, Carl Sable, Barry Schiffman, and Sergey Sigelman. 2002. Tracking and summarizing news on a daily basis with columbia’s newsblaster. In Proceedings of the 2nd international conference on Human Language Technology Research, pages 280–285. Masaki Murata, Toshiyuki Kanemaru, and Hitoshi Isahara. 2004. Automatic paraphrase acquisition based on matching of definition sentences in plural dictionaries (written in Japanese). Journal of Natural Language Processing, 11(5):135–149. Roberto Navigli and Paola Velardi. 2010. Learning word-class lattices for definition and hypernym extraction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pages 1318–1327. Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual machine translation for paraphrase generation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP-2004), pages 142–149. Deepak Ravichandran and Eduard H. Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), pages 41–47. Satoshi Sekine. 2005. Automatic paraphrase discovery based on context and keywords between ne pairs. In Proceedings of the Third International Workshop on Paraphrasing (IWP-2005), pages 80–87. Yusuke Shinyama, Satoshi Sekine, and Kiyoshi Sudo. 2002. Automatic paraphrase acquisition from news articles. In Proceedings of the 2nd international Conference on Human Language Technology Research (HLT2002), pages 313–318. Idan Szpektor and Ido Dagan. 2008. Learning entailment rules for unary template. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING2008), pages 849–856. Idan Szpektor, Eyal Shnarch, and Ido Dagan. 
2007. Instance-based evaluation of entailment rule acquisition. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL 2007), pages 456–463. 1097
2011
109
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 102–111, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Joint Annotation of Search Queries Michael Bendersky Dept. of Computer Science University of Massachusetts Amherst, MA [email protected] W. Bruce Croft Dept. of Computer Science University of Massachusetts Amherst, MA [email protected] David A. Smith Dept. of Computer Science University of Massachusetts Amherst, MA [email protected] Abstract Marking up search queries with linguistic annotations such as part-of-speech tags, capitalization, and segmentation, is an important part of query processing and understanding in information retrieval systems. Due to their brevity and idiosyncratic structure, search queries pose a challenge to existing NLP tools. To address this challenge, we propose a probabilistic approach for performing joint query annotation. First, we derive a robust set of unsupervised independent annotations, using queries and pseudo-relevance feedback. Then, we stack additional classifiers on the independent annotations, and exploit the dependencies between them to further improve the accuracy, even with a very limited amount of available training data. We evaluate our method using a range of queries extracted from a web search log. Experimental results verify the effectiveness of our approach for both short keyword queries, and verbose natural language queries. 1 Introduction Automatic mark-up of textual documents with linguistic annotations such as part-of-speech tags, sentence constituents, named entities, or semantic roles is a common practice in natural language processing (NLP). It is, however, much less common in information retrieval (IR) applications. Accordingly, in this paper, we focus on annotating search queries submitted by the users to a search engine. There are several key differences between user queries and the documents used in NLP (e.g., news articles or web pages). As previous research shows, these differences severely limit the applicability of standard NLP techniques for annotating queries and require development of novel annotation approaches for query corpora (Bergsma and Wang, 2007; Barr et al., 2008; Lu et al., 2009; Bendersky et al., 2010; Li, 2010). The most salient difference between queries and documents is their length. Most search queries are very short, and even longer queries are usually shorter than the average written sentence. Due to their brevity, queries often cannot be divided into sub-parts, and do not provide enough context for accurate annotations to be made using the standard NLP tools such as taggers, parsers or chunkers, which are trained on more syntactically coherent textual units. A recent analysis of web query logs by Bendersky and Croft (2009) shows, however, that despite their brevity, queries are grammatically diverse. Some queries are keyword concatenations, some are semicomplete verbal phrases and some are wh-questions. It is essential for the search engine to correctly annotate the query structure, and the quality of these query annotations has been shown to be a crucial first step towards the development of reliable and robust query processing, representation and understanding algorithms (Barr et al., 2008; Guo et al., 2008; Guo et al., 2009; Manshadi and Li, 2009; Li, 2010). 
However, in current query annotation systems, even sentence-like queries are often hard to parse and annotate, as they are prone to contain misspellings and idiosyncratic grammatical structures. They also tend to lack prepositions, proper punctuation, or capitalization, since users (often correctly) assume that these features are disregarded by the retrieval system.

(a) Term      CAP  TAG  SEG        (b) Term      CAP  TAG  SEG        (c) Term      CAP  TAG  SEG
    who       L    X    B              kindred   C    N    B              shih      C    N    B
    won       L    V    I              where     C    X    B              tzu       C    N    I
    the       L    X    B              would     C    X    I              health    L    N    B
    2004      L    X    B              i         C    X    I              problems  L    N    I
    kentucky  C    N    B              be        C    V    I
    derby     C    N    I

Figure 1: Examples of a mark-up scheme for annotating capitalization (L – lowercase, C – otherwise), POS tags (N – noun, V – verb, X – otherwise) and segmentation (B/I – beginning of/inside the chunk).

In this paper, we propose a novel joint query annotation method to improve the effectiveness of existing query annotations, especially for longer, more complex search queries. Most existing research focuses on using a single type of annotation for information retrieval such as subject-verb-object dependencies (Balasubramanian and Allan, 2009), named-entity recognition (Guo et al., 2009), phrase chunking (Guo et al., 2008), or semantic labeling (Li, 2010). In contrast, the main focus of this work is on developing a unified approach for performing reliable annotations of different types. To this end, we propose a probabilistic method for performing a joint query annotation. This method allows us to exploit the dependency between different unsupervised annotations to further improve the accuracy of the entire set of annotations. For instance, our method can leverage the information about estimated parts-of-speech tags and capitalization of query terms to improve the accuracy of query segmentation.

We empirically evaluate the joint query annotation method on a range of query types. Instead of just focusing our attention on keyword queries, as is often done in previous work (Barr et al., 2008; Bergsma and Wang, 2007; Tan and Peng, 2008; Guo et al., 2008), we also explore the performance of our annotations with more complex natural language search queries such as verbal phrases and wh-questions, which often pose a challenge for IR applications (Bendersky et al., 2010; Kumaran and Allan, 2007; Kumaran and Carvalho, 2009; Lease, 2007). We show that even with a very limited amount of training data, our joint annotation method significantly outperforms annotations that were done independently for these queries.

The rest of the paper is organized as follows. In Section 2 we demonstrate several examples of annotated search queries. Then, in Section 3, we introduce our joint query annotation method. In Section 4 we describe two types of independent query annotations that are used as input for the joint query annotation. Section 5 details the related work and Section 6 presents the experimental results. We draw the conclusions from our work in Section 7.

2 Query Annotation Example

To demonstrate a possible implementation of linguistic annotation for search queries, Figure 1 presents a simple mark-up scheme, exemplified using three web search queries (as they appear in a search log): (a) who won the 2004 kentucky derby, (b) kindred where would i be, and (c) shih tzu health problems. In this scheme, each query is marked up using three annotations: capitalization, POS tags, and segmentation indicators.
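To make this mark-up concrete, the following sketch shows one way such a three-layer annotation could be represented as parallel label sequences over the query terms. The data layout is only an illustration (it is not the paper's actual format); the labels themselves are copied from query (a) in Figure 1.

```python
# A minimal sketch of the three-layer mark-up from Figure 1, represented as
# parallel label sequences over the query terms.
from typing import Dict, List

Annotation = Dict[str, List[str]]  # annotation type -> one label per query term

query_a = ["who", "won", "the", "2004", "kentucky", "derby"]
annotations_a: Annotation = {
    "CAP": ["L", "L", "L", "L", "C", "C"],  # L = lowercase, C = otherwise
    "TAG": ["X", "V", "X", "X", "N", "N"],  # N = noun, V = verb, X = otherwise
    "SEG": ["B", "I", "B", "B", "B", "I"],  # B/I = beginning of / inside a chunk
}

# Every layer assigns exactly one symbol per term, so the same sequence-labeling
# machinery can be reused for capitalization, tagging, and segmentation.
for layer, labels in annotations_a.items():
    assert len(labels) == len(query_a)
    print(layer, list(zip(query_a, labels)))
```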
Note that all the query terms are non-capitalized, and no punctuation is provided by the user, which complicates the query annotation process. While the simple annotation described in Figure 1 can be done with a very high accuracy for standard document corpora, both previous work (Barr et al., 2008; Bergsma and Wang, 2007; Jones and Fain, 2003) and the experimental results in this paper indicate that it is challenging to perform well on queries. The queries in Figure 1 illustrate this point. Query (a) in Figure 1 is a wh-question, and it contains a capitalized concept (“Kentucky Derby”), a single verb, and four segments. Query (b) is a combination of an artist name and a song title and should be interpreted as Kindred — “Where Would I Be”. Query (c) is a concatenation of two short noun phrases: “Shih Tzu” and “health problems”.

3 Joint Query Annotation

Given a search query Q, which consists of a sequence of terms (q_1, ..., q_n), our goal is to annotate it with an appropriate set of linguistic structures Z_Q. In this work, we assume that the set Z_Q consists of shallow sequence annotations z_Q, each of which takes the form z_Q = (ζ_1, ..., ζ_n). In other words, each symbol ζ_i ∈ z_Q annotates a single query term. Many query annotations that are useful for IR can be represented using this simple form, including capitalization, POS tagging, phrase chunking, named entity recognition, and stopword indicators, to name just a few. For instance, Figure 1 demonstrates an example of a set of annotations Z_Q. In this example, Z_Q = {CAP, TAG, SEG}.

Most previous work on query annotation makes the independence assumption: every annotation z_Q ∈ Z_Q is done separately from the others. That is, it is assumed that the optimal linguistic annotation z^{*(I)}_Q is the annotation that has the highest probability given the query Q, regardless of the other annotations in the set Z_Q. Formally,

z^{*(I)}_Q = \arg\max_{z_Q} p(z_Q \mid Q)    (1)

The main shortcoming of this approach is in the assumption that the linguistic annotations in the set Z_Q are independent. In practice, there are dependencies between the different annotations, and they can be leveraged to derive a better estimate of the entire set of annotations. For instance, imagine that we need to perform two annotations: capitalization and POS tagging. Knowing that a query term is capitalized, we are more likely to decide that it is a proper noun. Vice versa, knowing that it is a preposition will reduce its probability of being capitalized. We would like to capture this intuition in the annotation process.

To address the problem of joint query annotation, we first assume that we have an initial set of annotations Z^{*(I)}_Q, which were performed for query Q independently of one another (we will show an example of how to derive such a set in Section 4). Given the initial set Z^{*(I)}_Q, we are interested in obtaining an annotation set Z^{*(J)}_Q, which jointly optimizes the probability of all the annotations, i.e.,

Z^{*(J)}_Q = \arg\max_{Z_Q} p(Z_Q \mid Z^{*(I)}_Q).

If the initial set of estimations is reasonably accurate, we can make the assumption that the annotations in the set Z^{*(J)}_Q are independent given the initial estimates Z^{*(I)}_Q, allowing us to separately optimize the probability of each annotation z^{*(J)}_Q ∈ Z^{*(J)}_Q:

z^{*(J)}_Q = \arg\max_{z_Q} p(z_Q \mid Z^{*(I)}_Q).    (2)

From Eq. 2, it is evident that the joint annotation task becomes that of finding some optimal unobserved sequence (annotation z^{*(J)}_Q), given the observed sequences (independent annotation set Z^{*(I)}_Q).
Accordingly, we can directly use a supervised sequential probabilistic model such as a CRF (Lafferty et al., 2001) to find the optimal z^{*(J)}_Q. In this CRF model, the optimal annotation z^{*(J)}_Q is the label we are trying to predict, and the set of independent annotations Z^{*(I)}_Q is used as the basis for the features used for prediction. Figure 2 outlines the algorithm for performing the joint query annotation. As input, the algorithm receives a training set of queries and their ground truth annotations. It then produces a set of independent annotation estimates, which are jointly used, together with the ground truth annotations, to learn a CRF model for each annotation type. Finally, these CRF models are used to predict annotations on a held-out set of queries, which are the output of the algorithm.

Input: Q_t — training set of queries.
       Z_{Q_t} — ground truth annotations for the training set of queries.
       Q_h — held-out set of queries.
(1) Obtain a set of independent annotation estimates Z*(I)_{Q_t}
(2) Initialize Z*(J)_{Q_h} ← ∅
(3) for each z*(I)_{Q_t} ∈ Z*(I)_{Q_t}:
(4)     Z'_{Q_t} ← Z*(I)_{Q_t} \ z*(I)_{Q_t}
(5)     Train a CRF model CRF(z_{Q_t}) using z_{Q_t} as a label and Z'_{Q_t} as features.
(6)     Predict annotation z*(J)_{Q_h}, using CRF(z_{Q_t}).
(7)     Z*(J)_{Q_h} ← Z*(J)_{Q_h} ∪ z*(J)_{Q_h}.
Output: Z*(J)_{Q_h} — predicted annotations for the held-out set of queries.

Figure 2: Algorithm for performing joint query annotation.

Note that this formulation of joint query annotation can be viewed as a stacked classification, in which a second, more effective, classifier is trained using the labels inferred by the first classifier as features. Stacked classifiers were recently shown to be an efficient and effective strategy for structured classification in NLP (Nivre and McDonald, 2008; Martins et al., 2008).

4 Independent Query Annotations

While the joint annotation method proposed in Section 3 is general enough to be applied to any set of independent query annotations, in this work we focus on two previously proposed independent annotation methods based on either the query itself, or the top sentences retrieved in response to the query (Bendersky et al., 2010). The main benefits of these two annotation methods are that they can be easily implemented using standard software tools, do not require any labeled data, and provide reasonable annotation accuracy. Next, we briefly describe these two independent annotation methods.

4.1 Query-based estimation

The most straightforward way to estimate the conditional probabilities in Eq. 1 is using the query itself. To make the estimation feasible, Bendersky et al. (2010) take a bag-of-words approach, and assume independence between both the query terms and the corresponding annotation symbols. Thus, the independent annotations in Eq. 1 are given by

z^{*(QRY)}_Q = \arg\max_{(\zeta_1, \ldots, \zeta_n)} \prod_{i \in (1, \ldots, n)} p(\zeta_i \mid q_i).    (3)

Following Bendersky et al. (2010) we use a large n-gram corpus (Brants and Franz, 2006) to estimate p(ζ_i | q_i) for annotating the query with capitalization and segmentation mark-up, and a standard POS tagger¹ for part-of-speech tagging of the query.

¹ http://crftagger.sourceforge.net/
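As a rough illustration of the query-based estimate in Eq. 3, the sketch below makes an independent capitalization decision for each query term from unigram counts. The counts are invented for illustration; the actual estimates in the paper come from a large web n-gram corpus.

```python
# A minimal sketch of the query-based estimate in Eq. 3, shown for the
# capitalization annotation only. Counts are hypothetical placeholders.
from typing import Dict, List, Tuple

# (capitalized occurrences, lowercased occurrences) of a term in an n-gram corpus
NGRAM_COUNTS: Dict[str, Tuple[int, int]] = {
    "kentucky": (90_000, 10_000),   # hypothetical
    "derby": (60_000, 40_000),
    "who": (5_000, 995_000),
}

def p_cap(term: str) -> float:
    cap, low = NGRAM_COUNTS.get(term, (1, 1))
    return cap / (cap + low)

def annotate_cap_qry(query: List[str]) -> List[str]:
    # argmax over {C, L} for each term separately (bag-of-words independence)
    return ["C" if p_cap(t) > 0.5 else "L" for t in query]

print(annotate_cap_qry(["who", "won", "the", "2004", "kentucky", "derby"]))
# The PRF-based variant described next replaces these corpus-wide counts with
# evidence from the top-k sentences retrieved for the query, weighted by p(r|Q).
```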
4.2 PRF-based estimation

Given a short, often ungrammatical query, it is hard to accurately estimate the conditional probability in Eq. 1 using the query terms alone. For instance, a keyword query hawaiian falls, which refers to a location, is inaccurately interpreted by a standard POS tagger as a noun-verb pair. On the other hand, given a sentence from a corpus that is relevant to the query such as “Hawaiian Falls is a family-friendly waterpark”, the word “falls” is correctly identified by a standard POS tagger as a proper noun. Accordingly, the document corpus can be bootstrapped in order to better estimate the query annotation. To this end, Bendersky et al. (2010) employ pseudo-relevance feedback (PRF), a method that has a long record of success in IR for tasks such as query expansion (Buckley, 1995; Lavrenko and Croft, 2001). In the most general form, given the set of all retrievable sentences r in the corpus C one can derive

p(z_Q \mid Q) = \sum_{r \in C} p(z_Q \mid r) p(r \mid Q).

Since for most sentences the conditional probability of relevance to the query p(r|Q) is vanishingly small, the above can be closely approximated by considering only a set of sentences R, retrieved at top-k positions in response to the query Q. This yields

p(z_Q \mid Q) \approx \sum_{r \in R} p(z_Q \mid r) p(r \mid Q).

Intuitively, the equation above models the query as a mixture of top-k retrieved sentences, where each sentence is weighted by its relevance to the query. Furthermore, to make the estimation of the conditional probability p(z_Q|r) feasible, it is assumed that the symbols ζ_i in the annotation sequence are independent, given a sentence r. Note that this assumption differs from the independence assumption in Eq. 3, since here the annotation symbols are not independent given the query Q. Accordingly, the PRF-based estimate for independent annotations in Eq. 1 is

z^{*(PRF)}_Q = \arg\max_{(\zeta_1, \ldots, \zeta_n)} \sum_{r \in R} \prod_{i \in (1, \ldots, n)} p(\zeta_i \mid r) p(r \mid Q).    (4)

Following Bendersky et al. (2010), an estimate of p(ζ_i|r) is a smoothed estimator that combines the information from the retrieved sentence r with the information about unigrams (for capitalization and POS tagging) and bigrams (for segmentation) from a large n-gram corpus (Brants and Franz, 2006).

5 Related Work

In recent years, linguistic annotation of search queries has been receiving increasing attention as an important step toward better query processing and understanding. The literature on query annotation includes query segmentation (Bergsma and Wang, 2007; Jones et al., 2006; Guo et al., 2008; Hagen et al., 2010; Hagen et al., 2011; Tan and Peng, 2008), part-of-speech and semantic tagging (Barr et al., 2008; Manshadi and Li, 2009; Li, 2010), named-entity recognition (Guo et al., 2009; Lu et al., 2009; Shen et al., 2008; Paşca, 2007), abbreviation disambiguation (Wei et al., 2008) and stopword detection (Lo et al., 2005; Jones and Fain, 2003). Most of the previous work on query annotation focuses on performing a particular annotation task (e.g., segmentation or POS tagging) in isolation. However, these annotations are often related, and thus we take a joint annotation approach, which combines several independent annotations to improve the overall annotation accuracy. A similar approach was recently proposed by Guo et al. (2008). There are several key differences, however, between the work presented here and their work. First, Guo et al. (2008) focus on query refinement (spelling corrections, word splitting, etc.) of short keyword queries. Instead, we are interested in annotation of queries of different types, including verbose natural language queries. While there is an overlap between query refinement and annotation, the focus of the latter is on providing linguistic information about existing queries (after initial refinement has been performed).
Such information is especially important for more verbose and gramatically complex queries. In addition, while all the methods proposed by Guo et al. (2008) require large amounts of training data (thousands of training examples), our joint annotation method can be effectively trained with a minimal human labeling effort (several hundred training examples). An additional research area which is relevant to this paper is the work on joint structure modeling (Finkel and Manning, 2009; Toutanova et al., 2008) and stacked classification (Nivre and McDonald, 2008; Martins et al., 2008) in natural language processing. These approaches have been shown to be successful for tasks such as parsing and named entity recognition in newswire data (Finkel and Manning, 2009) or semantic role labeling in the Penn Treebank and Brown corpus (Toutanova et al., 2008). Similarly to this work in NLP, we demonstrate that a joint approach for modeling the linguistic query structure can also be beneficial for IR applications. 6 Experiments 6.1 Experimental Setup For evaluating the performance of our query annotation methods, we use a random sample of 250 queries2 from a search log. This sample is manually labeled with three annotations: capitalization, POS tags, and segmentation, according to the description of these annotations in Figure 1. In this set of 250 queries, there are 93 questions, 96 phrases contain2The annotations are available at http://ciir.cs.umass.edu/∼bemike/data.html 106 CAP F1 (% impr) MQA (% impr) i-QRY 0.641 (-/-) 0.779 (-/-) i-PRF 0.711∗(+10.9/-) 0.811∗(+4.1/-) j-QRY 0.620†(-3.3/-12.8) 0.805∗(+3.3/-0.7) j-PRF 0.718∗(+12.0/+0.9) 0.840∗ †(+7.8/+3.6) TAG Acc. (% impr) MQA (% impr) i-QRY 0.893 (-/-) 0.878 (-/-) i-PRF 0.916∗(+2.6/-) 0.914∗(+4.1/-) j-QRY 0.913∗(+2.2/-0.3) 0.912∗(+3.9/-0.2) j-PRF 0.924∗(+3.5/+0.9) 0.922∗(+5.0/+0.9) SEG F1 (% impr) MQA (% impr) i-QRY 0.694 (-/-) 0.672 (-/-) i-PRF 0.753∗(+8.5/-) 0.710∗(+5.7/-) j-QRY 0.817∗ †(+17.7/+8.5) 0.803∗ †(+19.5/+13.1) j-PRF 0.819∗ †(+18.0/+8.8) 0.803∗ †(+19.5/+13.1) Table 1: Summary of query annotation performance for capitalization (CAP), POS tagging (TAG) and segmentation. Numbers in parentheses indicate % of improvement over the i-QRY and i-PRF baselines, respectively. Best result per measure and annotation is boldfaced. ∗and † denote statistically significant differences with i-QRY and i-PRF, respectively. ing a verb, and 61 short keyword queries (Figure 1 contains a single example of each of these types). In order to test the effectiveness of the joint query annotation, we compare four methods. In the first two methods, i-QRY and i-PRF the three annotations are done independently. Method i-QRY is based on z∗(QRY ) Q estimator (Eq. 3). Method i-PRF is based on the z∗(P RF ) Q estimator (Eq. 4). The next two methods, j-QRY and j-PRF, are joint annotation methods, which perform a joint optimization over the entire set of annotations, as described in the algorithm in Figure 2. j-QRY and j-PRF differ in their choice of the initial independent annotation set Z∗(I) Q in line (1) of the algorithm (see Figure 2). j-QRY uses only the annotations performed by iQRY (3 initial independent annotation estimates), while j-PRF combines the annotations performed by i-QRY with the annotations performed by i-PRF (6 initial annotation estimates). The CRF model training in line (6) of the algorithm is implemented using CRF++ toolkit3. 3http://crfpp.sourceforge.net/ The performance of the joint annotation methods is estimated using a 10-fold cross-validation. 
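For concreteness, the stacked training loop of Figure 2 can be sketched as follows. This assumes the sklearn-crfsuite package as a stand-in for the CRF++ toolkit used in the paper, and the query/annotation data layout shown is an illustration rather than the paper's actual format.

```python
# A minimal sketch of the stacked training loop in Figure 2, with the
# sklearn-crfsuite package standing in for CRF++. Data structures are assumed
# for illustration: each query is a dict with a "terms" list and one label list
# per independent annotation layer (e.g., "CAP", "TAG", "SEG").
import sklearn_crfsuite

def stack_features(query, other_layers, i):
    # Features for the i-th query term: the term itself plus the labels that the
    # remaining independent annotators assigned to it (Z'_Q in Figure 2, line 4).
    feats = {"term": query["terms"][i]}
    for name in other_layers:
        feats["indep_" + name] = query[name][i]
    return feats

def featurize(queries, other_layers):
    return [[stack_features(q, other_layers, i) for i in range(len(q["terms"]))]
            for q in queries]

def joint_annotate(train_queries, gold_labels, heldout_queries, target, all_layers):
    # Train CRF(z_Q) with the gold labels of `target` and the other independent
    # annotations as features (Figure 2, lines 3-5), then predict on the
    # held-out queries (line 6).
    other_layers = [name for name in all_layers if name != target]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
    crf.fit(featurize(train_queries, other_layers), gold_labels)
    return crf.predict(featurize(heldout_queries, other_layers))
```

Following line (4) of the algorithm, the independent estimate of the annotation being predicted is left out of its own feature set in this sketch.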
In order to test the statistical significance of improvements attained by the proposed methods we use a two-sided Fisher’s randomization test with 20,000 permutations. Results with p-value < 0.05 are considered statistically significant. For reporting the performance of our methods we use two measures. The first measure is classification-oriented — treating the annotation decision for each query term as a classification. In case of capitalization and segmentation annotations these decisions are binary and we compute the precision and recall metrics, and report F1 — their harmonic mean. In case of POS tagging, the decisions are ternary, and hence we report the classification accuracy. We also report an additional, IR-oriented performance measure. As is typical in IR, we propose measuring the performance of the annotation methods on a per-query basis, to verify that the methods have uniform impact across queries. Accordingly, we report the mean of classification accuracies per query (MQA). Formally, MQA is computed as PN i=1 accQi N , where accQi is the classification accuracy for query Qi, and N is the number of queries. The empirical evaluation is conducted as follows. In Section 6.2, we discuss the general performance of the four annotation techniques, and compare the effectiveness of independent and joint annotations. In Section 6.3, we analyze the performance of the independent and joint annotation methods by query type. In Section 6.4, we compare the difficulty of performing query annotations for different query types. Finally, in Section 6.5, we compare the effectiveness of the proposed joint annotation for query segmentation with the existing query segmentation methods. 6.2 General Evaluation Table 1 shows the summary of the performance of the two independent and two joint annotation methods for the entire set of 250 queries. For independent methods, we see that i-PRF outperforms i-QRY for 107 CAP Verbal Phrases Questions Keywords F1 MQA F1 MQA F1 MQA i-PRF 0.750 0.862 0.590 0.839 0.784 0.687 j-PRF 0.687∗(-8.4%) 0.839∗(-2.7%) 0.671∗(+13.7%) 0.913∗(+8.8%) 0.814 (+3.8%) 0.732∗(+6.6%) TAG Verbal Phrases Questions Keywords Acc. MQA Acc. MQA Acc. MQA i-PRF 0.908 0.908 0.932 0.935 0.880 0.890 j-PRF 0.904 (-0.4%) 0.906 (-0.2%) 0.951∗(+2.1%) 0.953∗(+1.9%) 0.893 (+1.5%) 0.900 (+1.1%) SEG Verbal Phrases Questions Keywords F1 MQA F1 MQA F1 MQA i-PRF 0.751 0.700 0.740 0.700 0.816 0.747 j-PRF 0.772 (+2.8%) 0.742∗(+6.0%) 0.858∗(+15.9%) 0.838∗(+19.7%) 0.844 (+3.4%) 0.853∗(+14.2%) Table 2: Detailed analysis of the query annotation performance for capitalization (CAP), POS tagging (TAG) and segmentation by query type. Numbers in parentheses indicate % of improvement over the i-PRF baseline. Best result per measure and annotation is boldfaced. ∗denotes statistically significant differences with i-PRF. all annotation types, using both performance measures. In Table 1, we can also observe that the joint annotation methods are, in all cases, better than the corresponding independent ones. The highest improvements are attained by j-PRF, which always demonstrates the best performance both in terms of F1 and MQA. These results attest to both the importance of doing a joint optimization over the entire set of annotations and to the robustness of the initial annotations done by the i-PRF method. In all but one case, the j-PRF method, which uses these annotations as features, outperforms the j-QRY method that only uses the annotation done by i-QRY. 
The most significant improvements as a result of joint annotation are observed for the segmentation task. In this task, joint annotation achieves close to 20% improvement in MQA over the i-QRY method, and more than 10% improvement in MQA over the iPRF method. These improvements indicate that the segmentation decisions are strongly guided by capitalization and POS tagging. We also note that, in case of segmentation, the differences in performance between the two joint annotation methods, j-QRY and j-PRF, are not significant, indicating that the context of additional annotations in j-QRY makes up for the lack of more robust pseudo-relevance feedback based features. We also note that the lowest performance improvement as a result of joint annotation is evidenced for POS tagging. The improvements of joint annotation method j-PRF over the i-PRF method are less than 1%, and are not statistically significant. This is not surprising, since the standard POS taggers often already use bigrams and capitalization at training time, and do not acquire much additional information from other annotations. 6.3 Evaluation by Query Type Table 2 presents a detailed analysis of the performance of the best independent (i-PRF) and joint (jPRF) annotation methods by the three query types used for evaluation: verbal phrases, questions and keyword queries. From the analysis in Table 2, we note that the contribution of joint annotation varies significantly across query types. For instance, using j-PRF always leads to statistically significant improvements over the i-PRF baseline for questions. On the other hand, it is either statistically indistinguishable, or even significantly worse (in the case of capitalization) than the i-PRF baseline for the verbal phrases. Table 2 also demonstrates that joint annotation has a different impact on various annotations for the same query type. For instance, j-PRF has a significant positive effect on capitalization and segmentation for keyword queries, but only marginally improves the POS tagging. Similarly, for the verbal phrases, j-PRF has a significant positive effect only for the segmentation annotation. These variances in the performance of the j-PRF method point to the differences in the structure be108 Annotation Performance by Query Type F1 Verbal Phrases Questions Keyword Queries 60 65 70 75 80 85 90 95 100 CAP SEG TAG Figure 3: Comparative performance (in terms of F1 for capitalization and segmentation and accuracy for POS tagging) of the j-PRF method on the three query types. tween the query types. While dependence between the annotations plays an important role for question and keyword queries, which often share a common grammatical structure, this dependence is less useful for verbal phrases, which have a more diverse linguistic structure. Accordingly, a more in-depth investigation of the linguistic structure of the verbal phrase queries is an interesting direction for future work. 6.4 Annotation Difficulty Recall that in our experiments, out of the overall 250 annotated queries, there are 96 verbal phrases, 93 questions and 61 keyword queries. Figure 3 shows a plot that contrasts the relative performance for these three query types of our best-performing joint annotation method, j-PRF, on capitalization, POS tagging and segmentation annotation tasks. Next, we analyze the performance profiles for the annotation tasks shown in Figure 3. For the capitalization task, the performance of jPRF on verbal phrases and questions is similar, with the difference below 3%. 
The performance for keyword queries is much higher — with improvement over 20% compared to either of the other two types. We attribute this increase to both a larger number of positive examples in the short keyword queries (a higher percentage of terms in keyword queries is capitalized) and their simpler syntactic structure (adSEG F1 MQA SEG-1 0.768 0.754 SEG-2 0.824∗ 0.787∗ j-PRF 0.819∗(+6.7%/-0.6%) 0.803∗(+6.5%/+2.1%) Table 3: Comparison of the segmentation performance of the j-PRF method to two state-of-the-art segmentation methods. Numbers in parentheses indicate % of improvement over the SEG-1 and SEG-2 baselines respectively. Best result per measure and annotation is boldfaced. ∗ denotes statistically significant differences with SEG-1. jacent terms in these queries are likely to have the same case). For the segmentation task, the performance is at its best for the question and keyword queries, and at its worst (with a drop of 11%) for the verbal phrases. We hypothesize that this is due to the fact that question queries and keyword queries tend to have repetitive structures, while the grammatical structure for verbose queries is much more diverse. For the tagging task, the performance profile is reversed, compared to the other two tasks — the performance is at its worst for keyword queries, since their grammatical structure significantly differs from the grammatical structure of sentences in news articles, on which the POS tagger is trained. For question queries the performance is the best (6% increase over the keyword queries), since they resemble sentences encountered in traditional corpora. It is important to note that the results reported in Figure 3 are based on training the joint annotation model on all available queries with 10-fold crossvalidation. We might get different profiles if a separate annotation model was trained for each query type. In our case, however, the number of queries from each type is not sufficient to train a reliable model. We leave the investigation of separate training of joint annotation models by query type to future work. 6.5 Additional Comparisons In order to further evaluate the proposed joint annotation method, j-PRF, in this section we compare its performance to other query annotation methods previously reported in the literature. Unfortunately, there is not much published work on query capitalization and query POS tagging that goes beyond the simple query-based methods described in Sec109 tion 4.1. The published work on the more advanced methods usually requires access to large amounts of proprietary user data such as query logs and clicks (Barr et al., 2008; Guo et al., 2008; Guo et al., 2009). Therefore, in this section we focus on recent work on query segmentation (Bergsma and Wang, 2007; Hagen et al., 2010). We compare the segmentation effectiveness of our best performing method, j-PRF, to that of these query segmentation methods. The first method, SEG-1, was first proposed by Hagen et al. (2010). It is currently the most effective publicly disclosed unsupervised query segmentation method. SEG-1 method requires an access to a large web n-gram corpus (Brants and Franz, 2006). The optimal segmentation for query Q, S∗ Q, is then obtained using S∗ Q = argmax S∈SQ X s∈S,|s|>1 |s||s|count(s), where SQ is the set of all possible query segmentations, S is a possible segmentation, s is a segment in S, and count(s) is the frequency of s in the web n-gram corpus. 
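A minimal sketch of this SEG-1 scoring function is given below. The n-gram counts are placeholders; the actual frequencies come from a large web n-gram corpus (Brants and Franz, 2006). Enumerating all 2^(n-1) candidate segmentations is feasible because search queries are short.

```python
# A minimal sketch of the SEG-1 segmentation score of Hagen et al. (2010)
# described above. N-gram counts are hypothetical placeholders.
from typing import Dict, List, Tuple

NGRAM_COUNT: Dict[Tuple[str, ...], int] = {
    ("shih", "tzu"): 120_000,              # hypothetical counts
    ("health", "problems"): 450_000,
    ("tzu", "health"): 300,
}

def segmentations(terms: List[str]):
    # every way of placing breaks between adjacent terms (2^(n-1) candidates)
    n = len(terms)
    for mask in range(2 ** (n - 1)):
        seg, start = [], 0
        for i in range(1, n):
            if mask & (1 << (i - 1)):
                seg.append(tuple(terms[start:i]))
                start = i
        seg.append(tuple(terms[start:]))
        yield seg

def seg1_score(segmentation) -> int:
    # sum over multi-word segments s of |s|^|s| * count(s)
    return sum(len(s) ** len(s) * NGRAM_COUNT.get(s, 0)
               for s in segmentation if len(s) > 1)

query = ["shih", "tzu", "health", "problems"]
print(max(segmentations(query), key=seg1_score))
# -> [('shih', 'tzu'), ('health', 'problems')]
```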
The second method, SEG-2, is based on a successful supervised segmentation method, which was first proposed by Bergsma and Wang (2007). SEG-2 employs a large set of features, and is pre-trained on the query collection described by Bergsma and Wang (2007). The features used by the SEG-2 method are described by Bendersky et al. (2009), and include, among others, n-gram frequencies in a sample of a query log, web corpus and Wikipedia titles. Table 3 demonstrates the comparison between the j-PRF, SEG-1 and SEG-2 methods. When compared to the SEG-1 baseline, j-PRF is significantly more effective, even though it only employs bigram counts (see Eq. 4), instead of the high-order n-grams used by SEG-1, for computing the score of a segmentation. This results underscores the benefit of joint annotation, which leverages capitalization and POS tagging to improve the quality of the segmentation. When compared to the SEG-2 baseline, j-PRF and SEG-2 are statistically indistinguishable. SEG-2 posits a slightly better F1, while j-PRF has a better MQA. This result demonstrates that the segmentation produced by the j-PRF method is as effective as the segmentation produced by the current supervised state-of-the-art segmentation methods, which employ external data sources and high-order n-grams. The benefit of the j-PRF method compared to the SEG-2 method, is that, simultaneously with the segmentation, it produces several additional query annotations (in this case, capitalization and POS tagging), eliminating the need to construct separate sequence classifiers for each annotation. 7 Conclusions In this paper, we have investigated a joint approach for annotating search queries with linguistic structures, including capitalization, POS tags and segmentation. To this end, we proposed a probabilistic approach for performing joint query annotation that takes into account the dependencies that exist between the different annotation types. Our experimental findings over a range of queries from a web search log unequivocally point to the superiority of the joint annotation methods over both query-based and pseudo-relevance feedback based independent annotation methods. These findings indicate that the different annotations are mutuallydependent. We are encouraged by the success of our joint query annotation technique, and intend to pursue the investigation of its utility for IR applications. In the future, we intend to research the use of joint query annotations for additional IR tasks, e.g., for constructing better query formulations for ranking algorithms. 8 Acknowledgment This work was supported in part by the Center for Intelligent Information Retrieval and in part by ARRA NSF IIS-9014442. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor. 110 References Niranjan Balasubramanian and James Allan. 2009. Syntactic query models for restatement retrieval. In Proc. of SPIRE, pages 143–155. Cory Barr, Rosie Jones, and Moira Regelson. 2008. The linguistic structure of english web-search queries. In Proc. of EMNLP, pages 1021–1030. Michael Bendersky and W. Bruce Croft. 2009. Analysis of long queries in a large scale search log. In Proc. of Workshop on Web Search Click Data, pages 8–14. Michael Bendersky, David Smith, and W. Bruce Croft. 2009. Two-stage query segmentation for information retrieval. In Proc. of SIGIR, pages 810–811. Michael Bendersky, W. Bruce Croft, and David A. Smith. 2010. 
Structural annotation of search queries using pseudo-relevance feedback. In Proc. of CIKM, pages 1537–1540. Shane Bergsma and Qin I. Wang. 2007. Learning noun phrase query segmentation. In Proc. of EMNLP, pages 819–826. Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram Version 1. Chris Buckley. 1995. Automatic query expansion using SMART. In Proc. of TREC-3, pages 69–80. Jenny R. Finkel and Christopher D. Manning. 2009. Joint parsing and named entity recognition. In Proc. of NAACL, pages 326–334. Jiafeng Guo, Gu Xu, Hang Li, and Xueqi Cheng. 2008. A unified and discriminative model for query refinement. In Proc. of SIGIR, pages 379–386. Jiafeng Guo, Gu Xu, Xueqi Cheng, and Hang Li. 2009. Named entity recognition in query. In Proc. of SIGIR, pages 267–274. Matthias Hagen, Martin Potthast, Benno Stein, and Christof Braeutigam. 2010. The power of naive query segmentation. In Proc. of SIGIR, pages 797–798. Matthias Hagen, Martin Potthast, Benno Stein, and Christof Br¨autigam. 2011. Query segmentation revisited. In Proc. of WWW, pages 97–106. Rosie Jones and Daniel C. Fain. 2003. Query word deletion prediction. In Proc. of SIGIR, pages 435–436. Rosie Jones, Benjamin Rey, Omid Madani, and Wiley Greiner. 2006. Generating query substitutions. In Proc. of WWW, pages 387–396. Giridhar Kumaran and James Allan. 2007. A case for shorter queries, and helping user create them. In Proc. of NAACL, pages 220–227. Giridhar Kumaran and Vitor R. Carvalho. 2009. Reducing long queries using query quality predictors. In Proc. of SIGIR, pages 564–571. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of ICML, pages 282–289. Victor Lavrenko and W. Bruce Croft. 2001. Relevance based language models. In Proc. of SIGIR, pages 120– 127. Matthew Lease. 2007. Natural language processing for information retrieval: the time is ripe (again). In Proceedings of PIKM. Xiao Li. 2010. Understanding the semantic structure of noun phrase queries. In Proc. of ACL, pages 1337– 1345, Morristown, NJ, USA. Rachel T. Lo, Ben He, and Iadh Ounis. 2005. Automatically building a stopword list for an information retrieval system. In Proc. of DIR. Yumao Lu, Fuchun Peng, Gilad Mishne, Xing Wei, and Benoit Dumoulin. 2009. Improving Web search relevance with semantic features. In Proc. of EMNLP, pages 648–657. Mehdi Manshadi and Xiao Li. 2009. Semantic Tagging of Web Search Queries. In Proc. of ACL, pages 861– 869. Andr´e F. T. Martins, Dipanjan Das, Noah A. Smith, and Eric P. Xing. 2008. Stacking dependency parsers. In Proc. of EMNLP, pages 157–166. Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proc. of ACL, pages 950–958. Marius Pas¸ca. 2007. Weakly-supervised discovery of named entities using web search queries. In Proc. of CIKM, pages 683–690. Dou Shen, Toby Walkery, Zijian Zhengy, Qiang Yangz, and Ying Li. 2008. Personal name classification in web queries. In Proc. of WSDM, pages 149–158. Bin Tan and Fuchun Peng. 2008. Unsupervised query segmentation using generative language models and Wikipedia. In Proc. of WWW, pages 347–356. Kristina Toutanova, Aria Haghighi, and Christopher D. Manning. 2008. A global joint model for semantic role labeling. Computational Linguistics, 34:161–191, June. Xing Wei, Fuchun Peng, and Benoit Dumoulin. 2008. Analyzing web text association to disambiguate abbreviation in queries. In Proc. of SIGIR, pages 751–752. 111
2011
11
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1098–1108, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Learning From Collective Human Behavior to Introduce Diversity in Lexical Choice Vahed Qazvinian Department of EECS University of Michigan Ann Arbor, MI [email protected] Dragomir R. Radev School of Information Department of EECS University of Michigan Ann Arbor, MI [email protected] Abstract We analyze collective discourse, a collective human behavior in content generation, and show that it exhibits diversity, a property of general collective systems. Using extensive analysis, we propose a novel paradigm for designing summary generation systems that reflect the diversity of perspectives seen in reallife collective summarization. We analyze 50 sets of summaries written by human about the same story or artifact and investigate the diversity of perspectives across these summaries. We show how different summaries use various phrasal information units (i.e., nuggets) to express the same atomic semantic units, called factoids. Finally, we present a ranker that employs distributional similarities to build a network of words, and captures the diversity of perspectives by detecting communities in this network. Our experiments show how our system outperforms a wide range of other document ranking systems that leverage diversity. 1 Introduction In sociology, the term collective behavior is used to denote mass activities that are not centrally coordinated (Blumer, 1951). Collective behavior is different from group behavior in the following ways: (a) it involves limited social interaction, (b) membership is fluid, and (c) it generates weak and unconventional norms (Smelser, 1963). In this paper, we focus on the computational analysis of collective discourse, a collective behavior seen in interactive content contribution and text summarization in online social media. In collective discourse each individual’s behavior is largely independent of that of other individuals. In social media, discourse (Grosz and Sidner, 1986) is often a collective reaction to an event. One scenario leading to collective reaction to a welldefined subject is when an event occurs (a movie is released, a story occurs, a paper is published) and people independently write about it (movie reviews, news headlines, citation sentences). This process of content generation happens over time, and each person chooses the aspects to cover. Each event has an onset and a time of death after which nothing is written about it. Tracing the generation of content over many instances will reveal temporal patterns that will allow us to make sense of the text generated around a particular event. To understand collective discourse, we are interested in behavior that happens over a short period of time. We focus on topics that are relatively welldefined in scope such as a particular event or a single news event that does not evolve over time. This can eventually be extended to events and issues that are evolving either in time or scope such as elections, wars, or the economy. In social sciences and the study of complex systems a lot of work has been done to study such collective systems, and their properties such as selforganization (Page, 2007) and diversity (Hong and Page, 2009; Fisher, 2009). However, there is little work that studies a collective system in which members individually write summaries. 
In most of this paper, we will be concerned with developing a complex systems view of the set of collectively written summaries, and give evidence of 1098 the diversity of perspectives and its cause. We believe that out experiments will give insight into new models of text generation, which is aimed at modeling the process of producing natural language texts, and is best characterized as the process of making choices between alternate linguistic realizations, also known as lexical choice (Elhadad, 1995; Barzilay and Lee, 2002; Stede, 1995). 2 Prior Work In summarization, a number of previous methods have focused on diversity. (Mei et al., 2010) introduce a diversity-focused ranking methodology based on reinforced random walks in information networks. Their random walk model introduces the rich-gets-richer mechanism to PageRank with reinforcements on transition probabilities between vertices. A similar ranking model is the Grasshopper ranking model (Zhu et al., 2007), which leverages an absorbing random walk. This model starts with a regular time-homogeneous random walk, and in each step the node with the highest weight is set as an absorbing state. The multi-view point summarization of opinionated text is discussed in (Paul et al., 2010). Paul et al. introduce Comparative LexRank, based on the LexRank ranking model (Erkan and Radev, 2004). Their random walk formulation is to score sentences and pairs of sentences from opposite viewpoints (clusters) based on both their representativeness of the collection as well as their contrastiveness with each other. Once a lexical similarity graph is built, they modify the graph based on cluster information and perform LexRank on the modified cosine similarity graph. The most well-known paper that address diversity in summarization is (Carbonell and Goldstein, 1998), which introduces Maximal Marginal Relevance (MMR). This method is based on a greedy algorithm that picks sentences in each step that are the least similar to the summary so far. There are a few other diversity-focused summarization systems like C-LexRank (Qazvinian and Radev, 2008), which employs document clustering. These papers try to increase diversity in summarizing documents, but do not explain the type of the diversity in their inputs. In this paper, we give an insightful discussion on the nature of the diversity seen in collective discourse, and will explain why some of the mentioned methods may not work under such environments. In prior work on evaluating independent contributions in content generation, Voorhees (Voorhees, 1998) studied IR systems and showed that relevance judgments differ significantly between humans but relative rankings show high degrees of stability across annotators. However, perhaps the closest work to this paper is (van Halteren and Teufel, 2004) in which 40 Dutch students and 10 NLP researchers were asked to summarize a BBC news report, resulting in 50 different summaries. Teufel and van Halteren also used 6 DUC1-provided summaries, and annotations from 10 student participants and 4 additional researchers, to create 20 summaries for another news article in the DUC datasets. They calculated the Kappa statistic (Carletta, 1996; Krippendorff, 1980) and observed high agreement, indicating that the task of atomic semantic unit (factoid) extraction can be robustly performed in naturally occurring text, without any copy-editing. The diversity of perspectives and the unprecedented growth of the factoid inventory also affects evaluation in text summarization. 
Evaluation methods are either extrinsic, in which the summaries are evaluated based on their quality in performing a specific task (Sp¨arck-Jones, 1999) or intrinsic where the quality of the summary itself is evaluated, regardless of any applied task (van Halteren and Teufel, 2003; Nenkova and Passonneau, 2004). These evaluation methods assess the information content in the summaries that are generated automatically. Finally, recent research on analyzing online social media shown a growing interest in mining news stories and headlines because of its broad applications ranging from “meme” tracking and spike detection (Leskovec et al., 2009) to text summarization (Barzilay and McKeown, 2005). In similar work on blogs, it is shown that detecting topics (Kumar et al., 2003; Adar et al., 2007) and sentiment (Pang and Lee, 2004) in the blogosphere can help identify influential bloggers (Adar et al., 2004; Java et al., 2006) and mine opinions about products (Mishne and Glance, 2006). 1Document Understanding Conference 1099 3 Data Annotation The datasets used in our experiments represent two completely different categories: news headlines, and scientific citation sentences. The headlines datasets consist of 25 clusters of news headlines collected from Google News2, and the citations datasets have 25 clusters of citations to specific scientific papers from the ACL Anthology Network (AAN)3. Each cluster consists of a number of unique summaries (headlines or citations) about the same artifact (nonevolving news story or scientific paper) written by different people. Table 1 lists some of the clusters with the number of summaries in them. ID type Name Story/Title # 1 hdl miss Miss Venezuela wins miss universe’09 125 2 hdl typhoon Second typhoon hit philippines 100 3 hdl russian Accident at Russian hydro-plant 101 4 hdl redsox Boston Red Sox win world series 99 5 hdl gervais “Invention of Lying” movie reviewed 97 · · · · · · · · · 25 hdl yale Yale lab tech in court 10 26 cit N03-1017 Statistical Phrase-Based Translation 172 27 cit P02-1006 Learning Surface Text Patterns ... 72 28 cit P05-1012 On-line Large-Margin Training ... 71 29 cit C96-1058 Three New Probabilistic Models ... 66 30 cit P05-1033 A Hierarchical Phrase-Based Model ... 65 · · · · · · · · · 50 cit H05-1047 A Semantic Approach to Recognizing ... 7 Table 1: Some of the annotated datasets and the number of summaries in each of them (hdl = headlines; cit = citations) 3.1 Nuggets vs. Factoids We define an annotation task that requires explicit definitions that distinguish between phrases that represent the same or different information units. Unfortunately, there is little consensus in the literature on such definitions. Therefore, we follow (van Halteren and Teufel, 2003) and make the following distinction. We define a nugget to be a phrasal information unit. Different nuggets may all represent the same atomic semantic unit, which we call as a factoid. In the following headlines, which are randomly extracted from the redsox dataset, nuggets are manually underlined. red sox win 2007 world series boston red sox blank rockies to clinch world series 2news.google.com 3http://clair.si.umich.edu/clair/anthology/ boston fans celebrate world series win; 37 arrests reported These 3 headlines contain 9 nuggets, which represent 5 factoids or classes of equivalent nuggets. 
f1 : {red sox, boston, boston red sox} f2 : {2007 world series, world series win, world series} f3 : {rockies} f4 : {37 arrests} f5 : {fans celebrate} This example suggests that different headlines on the same story written independently of one another use different phrases (nuggets) to refer to the same semantic unit (e.g., “red sox” vs. “boston” vs. “boston red sox”) or to semantic units corresponding to different aspects of the story (e.g., “37 arrests” vs. “rockies”). In the former case different nuggets are used to represent the same factoid, while in the latter case different nuggets are used to express different factoids. This analogy is similar to the definition of factoids in (van Halteren and Teufel, 2004). The following citation sentences to Koehn’s work suggest that a similar phenomenon also happens in citations. We also compared our model with pharaoh (Koehn et al, 2003). Koehn et al (2003) find that phrases longer than three words improve performance little. Koehn et al (2003) suggest limiting phrase length to three words or less. For further information on these parameter settings, confer (koehn et al, 2003). where the first author mentions “pharaoh” as a contribution of Koehn et al, but the second and third use different nuggets to represent the same contribution: use of trigrams. However, as the last citation shows, a citation sentence, unlike news headlines, may cover no information about the target paper. The use of phrasal information as nuggets is an essential element to our experiments, since some headline writers often try to use uncommon terms to refer to a factoid. For instance, two headlines from the redsox cluster are: Short wait for bossox this time Soxcess started upstairs 1100 Following these examples, we asked two annotators to annotate all 1, 390 headlines, and 926 citations. The annotators were asked to follow precise guidelines in nugget extraction. Our guidelines instructed annotators to extract non-overlapping phrases from each headline as nuggets. Therefore, each nugget should be a substring of the headline that represents a semantic unit4. Previously (Lin and Hovy, 2002) had shown that information overlap judgment is a difficult task for human annotators. To avoid such a difficulty, we enforced our annotators to extract non-overlapping nuggets from a summary to make sure that they are mutually independent and that information overlap between them is minimized. Finding agreement between annotated welldefined nuggets is straightforward and can be calculated in terms of Kappa. However, when nuggets themselves are to be extracted by annotators, the task becomes less obvious. To calculate the agreement, we annotated 10 randomly selected headline clusters twice and designed a simple evaluation scheme based on Kappa5. For each n-gram, w, in a given headline, we look if w is part of any nugget in either human annotations. If w occurs in both or neither, then the two annotators agree on it, and otherwise they do not. Based on this agreement setup, we can formalize the κ statistic as κ = Pr(a)−Pr(e) 1−Pr(e) where Pr(a) is the relative observed agreement among annotators, and Pr(e) is the probability that annotators agree by chance if each annotator is randomly assigning categories. Table 2 shows the unigram, bigram, and trigrambased average κ between the two human annotators (Human1, Human2). These results suggest that human annotators can reach substantial agreement when bigram and trigram nuggets are examined, and has reasonable agreement for unigram nuggets. 
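A small sketch of this n-gram-level agreement computation, shown for unigrams, is given below; the headline and nugget spans are toy examples rather than the actual annotations.

```python
# A minimal sketch of the unigram-level Kappa computation described above.
def in_any_nugget(token: str, nuggets) -> bool:
    return any(token in nugget.split() for nugget in nuggets)

def kappa(headline: str, nuggets_1, nuggets_2) -> float:
    tokens = headline.split()
    a = [in_any_nugget(t, nuggets_1) for t in tokens]
    b = [in_any_nugget(t, nuggets_2) for t in tokens]
    pr_a = sum(x == y for x, y in zip(a, b)) / len(tokens)   # observed agreement
    p1, p2 = sum(a) / len(a), sum(b) / len(b)                # per-annotator "in-nugget" rates
    pr_e = p1 * p2 + (1 - p1) * (1 - p2)                     # chance agreement
    return (pr_a - pr_e) / (1 - pr_e) if pr_e < 1 else 1.0

headline = "red sox win 2007 world series"
print(kappa(headline,
            nuggets_1=["red sox", "2007 world series"],
            nuggets_2=["red sox", "world series"]))
```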
4 Diversity We study the diversity of ways with which human summarizers talk about the same story or event and explain why such a diversity exists. 4Before the annotations, we lower-cased all summaries and removed duplicates 5Previously (Qazvinian and Radev, 2010) have shown high agreement in human judgments in a similar task on citation annotation Average κ unigram bigram trigram Human1 vs. Human2 0.76 ± 0.4 0.80 ± 0.4 0.89 ± 0.3 Table 2: Agreement between different annotators in terms of average Kappa in 25 headline clusters. 10 0 10 1 10 2 10 −2 10 −1 10 0 Pr(X ≥ c) c headlines Pr(X ≥ c) 10 0 10 1 10 2 10 −2 10 −1 10 0 Pr(X ≥ c) c citations Pr(X ≥ c) Figure 1: The cumulative probability distribution for the frequency of factoids (i.e., the probability that a factoid will be mentioned in c different summaries) across in each category. 4.1 Skewed Distributions Our first experiment is to analyze the popularity of different factoids. For each factoid in the annotated clusters, we extract its count, X, which is equal to the number of summaries it has been mentioned in, and then we look at the distribution of X. Figure 1 shows the cumulative probability distribution for these counts (i.e., the probability that a factoid will be mentioned in at least c different summaries) in both categories. These highly skewed distributions indicate that a large number of factoids (more than 28%) are only mentioned once across different clusters (e.g., “poor pitching of colorado” in the redsox cluster), and that a few factoids are mentioned in a large number of headlines (likely using different nuggets). The large number of factoids that are only mentioned in one headline indicates that different summarizers increase diversity by focusing on different aspects of a story or a paper. The set of nuggets also exhibit similar skewed distributions. If we look at individual nuggets, the redsox set shows that about 63 (or 80%) of the nuggets get mentioned in only one headline, resulting in a right-skewed distribution. The factoid analysis of the datasets reveals two main causes for the content diversity seen in headlines: (1) writers focus on different aspects of the story and therefore write about different factoids 1101 (e.g., “celebrations” vs. “poor pitching of colorado”). (2) writer use different nuggets to represent the same factoid (e.g., “redsox” vs. “bosox”). In the following sections we analyze the extent at which each scenario happens. 10 0 10 1 10 2 10 3 0 200 400 600 800 1000 number of summaries Inventory size headlines Nuggets Factoids 10 0 10 1 10 2 10 3 0 50 100 150 200 250 300 350 number of summaries Inventory size citations Nuggets Factoids Figure 2: The number of unique factoids and nuggets observed by reading n random summaries in all the clusters of each category 4.2 Factoid Inventory The emergence of diversity in covering different factoids suggests that looking at more summaries will capture a larger number of factoids. In order to analyze the growth of the factoid inventory, we perform a simple experiment. We shuffle the set of summaries from all 25 clusters in each category, and then look at the number of unique factoids and nuggets seen after reading nth summary. This number shows the amount of information that a randomly selected subset of n writers represent. This is important to study in order to find out whether we need a large number of summaries to capture all aspects of a story and build a complete factoid inventory. 
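The shuffling experiment just described can be sketched as follows; the per-summary annotation format ({nugget: factoid}) is an assumption made for illustration.

```python
# A small sketch of the inventory-growth experiment: shuffle the pooled
# summaries and track how many distinct nuggets and factoids have been seen
# after reading the first n of them.
import random
from typing import Dict, List

def inventory_growth(summaries: List[Dict[str, str]], seed: int = 0):
    # summaries: one {nugget: factoid} dict per summary (illustrative format)
    rng = random.Random(seed)
    order = summaries[:]
    rng.shuffle(order)
    nuggets, factoids, curve = set(), set(), []
    for ann in order:
        nuggets.update(ann.keys())
        factoids.update(ann.values())
        curve.append((len(nuggets), len(factoids)))
    return curve   # curve[n-1] = (nugget inventory, factoid inventory) after n summaries

toy = [
    {"red sox": "f_team", "world series": "f_title"},
    {"boston": "f_team", "world series win": "f_title"},
    {"bosox": "f_team", "37 arrests": "f_arrests"},
]
print(inventory_growth(toy))
```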
The plot in Figure 2 shows, at each n, the number of unique factoids and nuggets observed by reading n random summaries from the 25 clusters in each category. These curves are plotted on a semi-log scale to emphasize the difference between the growth patterns of the nugget inventories and the factoid inventories⁶. This finding numerically confirms a similar observation on human summary annotations discussed in (van Halteren and Teufel, 2003; van Halteren and Teufel, 2004). In their work, van Halteren and Teufel indicated that more than 10-20 human summaries are needed for a full factoid inventory. However, our experiments with nuggets of nearly 2,400 independent human summaries suggest that neither the nugget inventory nor the number of factoids is likely to show asymptotic behavior. Nevertheless, these plots show that the nugget inventory grows at a much faster rate than the factoid inventory. This means that a lot of the diversity seen in human summarization is a result of different lexical choices that represent the same semantic units or factoids.

⁶ A similar experiment using individual clusters exhibits similar behavior.

4.3 Summary Quality

In previous sections we gave evidence for the diversity seen in human summaries. However, a more important question to answer is whether these summaries all cover important aspects of the story. Here, we examine the quality of these summaries, study the distribution of information coverage in them, and investigate the number of summaries required to build a complete factoid inventory.

The information covered in each summary can be determined by the set of factoids (and not nuggets) and their frequencies across the datasets. For example, in the redsox dataset, “red sox”, “boston”, and “boston red sox” are nuggets that all represent the same piece of information: the red sox team. Therefore, different summaries that use these nuggets to refer to the red sox team should not be seen as very different. We use the Pyramid model (Nenkova and Passonneau, 2004) to value different summary factoids. Intuitively, factoids that are mentioned more frequently are more salient aspects of the story. Therefore, our pyramid model uses the normalized frequency at which a factoid is mentioned across a dataset as its weight. In the pyramid model, the individual factoids fall in tiers. If a factoid appears in more summaries, it falls in a higher tier. In principle, if the term w_i appears |w_i| times in the set of headlines it is assigned to the tier T_{|w_i|}.

The pyramid score that we use is computed as follows. Suppose the pyramid has n tiers, T_i, where tier T_n is the top tier and T_1 is the bottom. The weight of the factoids in tier T_i will be i (i.e., they appeared in i summaries). If |T_i| denotes the number of factoids in tier T_i, and D_i is the number of factoids in the summary that appear in T_i, then the total factoid weight for the summary is

D = \sum_{i=1}^{n} i \times D_i.

Additionally, the optimal pyramid score for a summary is

Max = \sum_{i=1}^{n} i \times |T_i|.

Finally, the pyramid score for a summary can be calculated as

P = \frac{D}{Max}.

Based on this scoring scheme, we can use the annotated datasets to determine the quality of individual headlines; a minimal computation of this score is sketched below. First, for each set we look at the variation in pyramid scores that individual summaries obtain in their set. Figure 3 shows, for each cluster, the variation in the pyramid scores (25th to 75th percentile range) of individual summaries evaluated against the factoids of that cluster.
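```python
# A minimal sketch of the pyramid score defined above. The factoid sets are toy
# values; in the paper the tiers are built from the annotated clusters.
from collections import Counter
from typing import Iterable, List

def pyramid_score(summary_factoids: Iterable[str],
                  all_summaries: List[List[str]]) -> float:
    # tier of a factoid = number of summaries in the cluster that mention it
    tier = Counter(f for summ in all_summaries for f in set(summ))
    d = sum(tier[f] for f in set(summary_factoids))   # D = sum_i i * D_i
    max_weight = sum(tier.values())                   # Max = sum_i i * |T_i|
    return d / max_weight if max_weight else 0.0

cluster = [["f_team", "f_title"],
           ["f_team", "f_title", "f_celebrations"],
           ["f_team", "f_arrests"]]
print(pyramid_score(["f_team", "f_title"], cluster))  # 5/7 ~ 0.714
```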
This figure indicates that the pyramid score of almost all summaries obtain values with high variations in most of the clusters For instance, individual headlines from redsox obtain pyramid scores as low as 0.00 and as high as 0.93. This high variation confirms the previous observations on diversity of information coverage in different summaries. Additionally, this figure shows that headlines generally obtain higher values than citations when considered as summaries. One reason, as explained before, is that a citation may not cover any important contribution of the paper it is citing, when headlines generally tend to cover some aspects of the story. High variation in quality means that in order to capture a larger information content we need to read a greater number of summaries. But how many headlines should one read to capture a desired level of information content? To answer this question, we perform an experiment based on drawing random summaries from the pool of all the clusters in each category. We perform a Monte Carlo simulation, in which for each n, we draw n random summaries, and look at the pyramid score achieved by reading these headlines. The pyramid score is calculated using the factoids from all 25 clusters in each category7. Each experiment is repeated 1, 000 times to find the statistical significance of the experiment and the variation from the average pyramid scores. Figure 4.3 shows the average pyramid scores over different n values in each category on a log-log scale. This figure shows how pyramid score grows and approaches 1.00 rapidly as more randomly selected summaries are seen. 10 0 10 1 10 2 10 3 10 −2 10 −1 10 0 number of summaries Pyramid Score headlines citations Figure 4: Average pyramid score obtained by reading n random summaries shows rapid asymptotic behavior. 5 Diversity-based Ranking In previous sections we showed that the diversity seen in human summaries could be according to different nuggets or phrases that represent the same factoid. Ideally, a summarizer that seeks to increase diversity should capture this phenomenon and avoid covering redundant nuggets. In this section, we use different state of the art summarization systems to rank the set of summaries in each cluster with respect to information content and diversity. To evaluate each system, we cut the ranked list at a constant length (in terms of the number of words) and calculate the pyramid score of the remaining text. 5.1 Distributional Similarity We have designed a summary ranker that will produce a ranked list of documents with respect to the diversity of their contents. Our model works based on ranking individual words and using the ranked list of words to rank documents that contain them. In order to capture the nuggets of equivalent semantic classes, we use a distributional similarity of 7Similar experiment using individual clusters exhibit similar results 1103 0 0.2 0.4 0.6 0.8 1 abortion amazon babies burger colombia england gervais google ireland maine mercury miss monkey mozart nobel priest ps3slim radiation redsox russian scientist soupy sweden typhoon yale A00_1023 A00_1043 A00_2024 C00_1072 C96_1058 D03_1017 D04_9907 H05_1047 H05_1079 J04_4002 N03_1017 N04_1033 P02_1006 P03_1001 P05_1012 P05_1013 P05_1014 P05_1033 P97_1003 P99_1065 W00_0403 W00_0603 W03_0301 W03_0510 W05_1203 Pyramid Score headlines citations Figure 3: The 25th to 75th percentile pyramid score range in individual clusters words that is inspired by (Lee, 1999). 
We represent each word by its context in the cluster and find the similarity of such contexts. Particularly, each word wi is represented by a bag of words, ℓi, that have a surface distance of 3 or smaller to wi anywhere in the cluster. In other words, ℓi contains any word that co-occurs with wi in a 4-gram in the cluster. This bag of words representation of words enables us to find the word-pair similarities. sim(wi, wj) = ⃗ℓi · ⃗ℓj q |⃗ℓi||⃗ℓj| (1) We use the pair-wise similarities of words in each cluster, and build a network of words and their similarities. Intuitively, words that appear in similar contexts are more similar to each other and will have a stronger edge between them in the network. Therefore, similar words, or words that appear in similar contexts, will form communities in this graph. Ideally, each community in the word similarity network would represent a factoid. To find the communities in the word network we use (Clauset et al., 2004), a hierarchical agglomeration algorithm which works by greedily optimizing the modularity in a linear running time for sparse graphs. The community detection algorithm will assign to each word wi, a community label Ci. For each community, we use LexRank to rank the words using the similarities in Equation 1, and assign a score to each word wi as S(wi) = Ri |Ci|, where Ri is the rank of wi in its community, and |Ci| is the number of words that belong to Ci. Figure 5.1 shows part police second sox celebrations red jump baseball unhappy sweeps pitching hitting arrest victory title dynasty fan poorer 2nd poor glory Pajek Figure 5: Part of the word similarity graph in the redsox cluster of the word similarity graph in the redsox cluster, in which each node is color-coded with its community. This figure illustrates how words that are semantically related to the same aspects of the story fall in the same communities (e.g., “police” and “arrest”). Finally, to rank sentences, we define the score of each document Dj as the sum of the scores of its words. pds(Dj) = X wi∈Dj S(wi) Intuitively, sentences that contain higher ranked words in highly populated communities will have a smaller score. To rank the sentences, we sort them in an ascending order, and cut the list when its size is greater than the length limit. 5.2 Other Methods 5.2.1 Random For each cluster in each category (citations and headlines), this method simply gets a random per1104 mutations of the summaries. In the headlines datasets, where most of the headlines cover some factoids about the story, we expect this method to perform reasonably well since randomization will increase the chances of covering headlines that focus on different factoids. However, in the citations dataset, where a citing sentence may cover no information about the cited paper, randomization has the drawback of selecting citations that have no valuable information in them. 5.2.2 LexRank LexRank (Erkan and Radev, 2004) works by first building a graph of all the documents (Di) in a cluster. The edges between corresponding nodes (di) represent the cosine similarity between them is above a threshold (0.10 following (Erkan and Radev, 2004)). Once the network is built, the system finds the most central sentences by performing a random walk on the graph. p(dj) = (1 −λ) 1 |D| + λ X di p(di)P(di →dj) (2) 5.2.3 MMR Maximal Marginal Relevance (MMR) (Carbonell and Goldstein, 1998) uses the pairwise cosine similarity matrix and greedily chooses sentences that are the least similar to those already in the summary. 
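Returning to the distributional-similarity ranker of Section 5.1, a rough sketch is given below. This is not the authors' implementation: it uses networkx's greedy modularity routine as a stand-in for the Clauset et al. (2004) community detection, and it ranks words within a community by weighted degree rather than by LexRank, which is a simplification; the pairwise word similarity step is quadratic in vocabulary size and only meant for small toy inputs.

```python
from collections import Counter, defaultdict
from itertools import combinations
import math

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities


def context_vectors(docs, window=3):
    """Bag-of-words contexts: for each word, count every word within a
    surface distance of `window` (i.e. co-occurring in a 4-gram)."""
    ctx = defaultdict(Counter)
    for toks in docs:
        for i, w in enumerate(toks):
            for j in range(max(0, i - window), min(len(toks), i + window + 1)):
                if j != i:
                    ctx[w][toks[j]] += 1
    return ctx


def cosine(a, b):
    num = sum(v * b[k] for k, v in a.items())
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def rank_by_word_communities(docs, window=3):
    """docs: list of token lists (one per summary).  Returns document
    indices sorted so that summaries built from highly ranked words in
    large communities come first (ascending document score)."""
    ctx = context_vectors(docs, window)
    g = nx.Graph()
    g.add_nodes_from(ctx)
    for wi, wj in combinations(ctx, 2):
        s = cosine(ctx[wi], ctx[wj])
        if s > 0:
            g.add_edge(wi, wj, weight=s)
    word_score = {}
    for comm in greedy_modularity_communities(g, weight="weight"):
        sub = g.subgraph(comm)
        # weighted degree as a simple proxy for LexRank within the community
        ranked = sorted(comm, key=lambda w: -sub.degree(w, weight="weight"))
        for rank, w in enumerate(ranked, start=1):
            word_score[w] = rank / len(comm)          # S(w) = R / |C|
    doc_scores = [sum(word_score.get(w, 0.0) for w in toks) for toks in docs]
    return sorted(range(len(docs)), key=lambda i: doc_scores[i])


docs = [["red", "sox", "win", "title"],
        ["fans", "celebrate", "red", "sox", "victory"],
        ["red", "sox", "sweep", "rockies"]]
print(rank_by_word_communities(docs))
```

The MMR selection criterion referred to just above is spelled out next.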
In particular, MMR = arg minDi∈D−A h maxDj∈A Sim(Di, Dj) i where A is the set of documents in the summary, initialized to A = ∅. 5.2.4 DivRank Unlike other time-homogeneous random walks (e.g., PageRank), DivRank does not assume that the transition probabilities remain constant over time. DivRank uses a vertex-reinforced random walk model to rank graph nodes based on a diversity based centrality. The basic assumption in DivRank is that the transition probability from a node to other is reinforced by the number of previous visits to the target node (Mei et al., 2010). Particularly, let’s assume pT (u, v) is the transition probability from any node u to node v at time T. Then, pT (di, dj) = (1 −λ).p∗(dj) + λ.p0(di, dj).NT (dj) DT (di) (3) where NT (dj) is the number of times the walk has visited dj up to time T and DT (di) = X dj∈V p0(di, dj)NT (dj) (4) Here, p∗(dj) is the prior distribution that determines the preference of visiting vertex dj. We try two variants of this algorithm: DivRank, in which p∗(dj) is uniform, and DivRank with priors in which p∗(dj) ∝l(Dj)−β, where l(Dj) is the number of the words in the document Dj and β is a parameter (β = 0.8). 5.2.5 C-LexRank C-LexRank is a clustering-based model in which the cosine similarities of document pairs are used to build a network of documents. Then the the network is split into communities, and the most salient documents in each community are selected (Qazvinian and Radev, 2008). C-LexRank focuses on finding communities of documents using their cosine similarity. The intuition is that documents that are more similar to each other contain similar factoids. We expect C-LexRank to be a strong ranker, but incapable of capturing the diversity caused by using different phrases to express the same meaning. The reason is that different nuggets that represent the same factoid often have no words in common (e.g., “victory” and “glory”) and won’t be captured by a lexical measure like cosine similarity. 5.3 Experiments We use each of the systems explained above to rank the summaries in each cluster. Each ranked list is then cut at a certain length (50 words for headlines, and 150 for citations) and the information content in the remaining text is examined using the pyramid score. Table 3 shows the average pyramid score achieved by different methods in each category. The method based on the distributional similarities of words outperforms other methods in the citations category. All methods show similar results in the headlines category, where most headlines cover at least 1 factoid about the story and a random ranker performs reasonably well. Table 4 shows top 3 headlines from 3 rankers: word distributional similarity (WDS), CLexRank, and MMR. In this example, the first 3 1105 Method headlines citations Mean pyramid 95% C.I. pyramid 95% C.I. 
R 0.928 [0.896, 0.959] 0.716 [0.625, 0.807] 0.822 MMR 0.930 [0.902, 0.960] 0.766 [0.684, 0.847] 0.848 LR 0.918 [0.891, 0.945] 0.728 [0.635, 0.822] 0.823 DR 0.927 [0.900, 0.955] 0.736 [0.667, 0.804] 0.832 DR(p) 0.916 [0.884, 0.949] 0.764 [0.697, 0.831] 0.840 C-LR 0.942 [0.919, 0.965] 0.781 [0.710, 0.852] 0.862 WDS 0.931 [0.905, 0.958] 0.813 [0.738, 0.887] 0.872 R=Random; LR=LexRank; DR=DivRank; DR(p)=DivRank with Priors; CLR=C-LexRank; WDS=Word Distributional Similarity; C.I.=Confidence Interval Table 3: Comparison of different ranking systems Method Top 3 headlines WDS 1: how sweep it is 2: fans celebrate red sox win 3: red sox take title C-LR 1: world series: red sox sweep rockies 2: red sox take world series 3: red sox win world series MMR 1:red sox scale the rockies 2: boston sweep colorado to win world series 3: rookies respond in first crack at the big time C-LR=C-LexRank; WDS=Word Distributional Similarity Table 4: Top 3 ranked summaries of the redsox cluster using different methods headlines produced by WDS cover two important factoids: “red sox winning the title” and “fans celebrating”. However, the second factoid is absent in the other two. 6 Conclusion and Future Work Our experiments on two different categories of human-written summaries (headlines and citations) showed that a lot of the diversity seen in human summarization comes from different nuggets that may actually represent the same semantic information (i.e., factoids). We showed that the factoids exhibit a skewed distribution model, and that the size of the nugget inventory asymptotic behavior even with a large number of summaries. We also showed high variation in summary quality across different summaries in terms of pyramid score, and that the information covered by reading n summaries has a rapidly growing asymptotic behavior as n increases. Finally, we proposed a ranking system that employs word distributional similarities to identify semantically equivalent words, and compared it with a wide range of summarization systems that leverage diversity. In the future, we plan to move to content from other collective systems on Web. In order to generalize our findings, we plan to examine blog comments, online reviews, and tweets (that discuss the same URL). We also plan to build a generation system that employs the Yule model (Yule, 1925) to determine the importance of each aspect (e.g. who, when, where, etc.) in order to produce summaries that include diverse aspects of a story. Our work has resulted in a publicly available dataset 8 of 25 annotated news clusters with nearly 1, 400 headlines, and 25 clusters of citation sentences with more than 900 citations. We believe that this dataset can open new dimensions in studying diversity and other aspects of automatic text generation. 7 Acknowledgments This work is supported by the National Science Foundation grant number IIS-0705832 and grant number IIS-0968489. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the supporters. References Eytan Adar, Li Zhang, Lada A. Adamic, and Rajan M. Lukose. 2004. Implicit structure and the dynamics of 8http://www-personal.umich.edu/˜vahed/ data.html 1106 Blogspace. In WWW’04, Workshop on the Weblogging Ecosystem. Eytan Adar, Daniel S. Weld, Brian N. Bershad, and Steven S. Gribble. 2007. Why we search: visualizing and predicting user behavior. In WWW’07, pages 161–170, New York, NY, USA. Regina Barzilay and Lillian Lee. 2002. 
Bootstrapping lexical choice via multiple-sequence alignment. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing - Volume 10, EMNLP ’02, pages 164–171. Regina Barzilay and Kathleen R. McKeown. 2005. Sentence fusion for multidocument news summarization. Comput. Linguist., 31(3):297–328. Herbert Blumer. 1951. Collective behavior. In Lee, Alfred McClung, Ed., Principles of Sociology. Jaime G. Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In SIGIR’98, pages 335–336. Jean Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Comput. Linguist., 22(2):249–254. Aaron Clauset, Mark E. J. Newman, and Cristopher Moore. 2004. Finding community structure in very large networks. Phys. Rev. E, 70(6). Michael Elhadad. 1995. Using argumentation in text generation. Journal of Pragmatics, 24:189–220. G¨unes¸ Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based centrality as salience in text summarization. Journal of Artificial Intelligence Research (JAIR). Len Fisher. 2009. The Perfect Swarm: The Science of Complexity in Everyday Life. Basic Books. Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Comput. Linguist., 12:175–204, July. Lu Hong and Scott Page. 2009. Interpreted and generated signals. Journal of Economic Theory, 144(5):2174–2196. Akshay Java, Pranam Kolari, Tim Finin, and Tim Oates. 2006. Modeling the spread of influence on the blogosphere. In WWW’06. Klaus Krippendorff. 1980. Content Analysis: An Introduction to its Methodology. Beverly Hills: Sage Publications. Ravi Kumar, Jasmine Novak, Prabhakar Raghavan, and Andrew Tomkins. 2003. On the bursty evolution of blogspace. In WWW’03, pages 568–576, New York, NY, USA. Lillian Lee. 1999. Measures of distributional similarity. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 25–32. Jure Leskovec, Lars Backstrom, and Jon Kleinberg. 2009. Meme-tracking and the dynamics of the news cycle. In KDD ’09: Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 497–506. Chin-Yew Lin and Eduard Hovy. 2002. Manual and automatic evaluation of summaries. In ACL-Workshop on Automatic Summarization. Qiaozhu Mei, Jian Guo, and Dragomir Radev. 2010. Divrank: the interplay of prestige and diversity in information networks. In Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1009–1018. Gilad Mishne and Natalie Glance. 2006. Predicting movie sales from blogger sentiment. In AAAI 2006 Spring Symposium on Computational Approaches to Analysing Weblogs (AAAI-CAAW 2006). Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. Proceedings of the HLT-NAACL conference. Scott E. Page. 2007. The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton University Press. Bo Pang and Lillian Lee. 2004. A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In ACL’04, Morristown, NJ, USA. Michael Paul, ChengXiang Zhai, and Roxana Girju. 2010. Summarizing contrastive viewpoints in opinionated text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 66–76. Vahed Qazvinian and Dragomir R. Radev. 2008. 
Scientific paper summarization using citation summary networks. In COLING 2008, Manchester, UK. Vahed Qazvinian and Dragomir R. Radev. 2010. Identifying non-explicit citing sentences for citation-based summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 555–564, Uppsala, Sweden, July. Association for Computational Linguistics. Neil J. Smelser. 1963. Theory of Collective Behavior. Free Press. Karen Sp¨arck-Jones. 1999. Automatic summarizing: factors and directions. In Inderjeet Mani and Mark T. Maybury, editors, Advances in automatic text summarization, chapter 1, pages 1 – 12. The MIT Press. Manfred Stede. 1995. Lexicalization in natural language generation: a survey. Artificial Intelligence Review, (8):309–336. Hans van Halteren and Simone Teufel. 2003. Examining the consensus between human summaries: initial experiments with factoid analysis. In Proceedings of 1107 the HLT-NAACL 03 on Text summarization workshop, pages 57–64, Morristown, NJ, USA. Association for Computational Linguistics. Hans van Halteren and Simone Teufel. 2004. Evaluating information content by factoid analysis: human annotation and stability. In EMNLP’04, Barcelona. Ellen M. Voorhees. 1998. Variations in relevance judgments and the measurement of retrieval effectiveness. In SIGIR ’98: Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 315–323. G. Udny Yule. 1925. A mathematical theory of evolution, based on the conclusions of dr. j. c. willis, f.r.s. Philosophical Transactions of the Royal Society of London. Series B, Containing Papers of a Biological Character, 213:21–87. Xiaojin Zhu, Andrew Goldberg, Jurgen Van Gael, and David Andrzejewski. 2007. Improving diversity in ranking using absorbing random walks. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 97–104. 1108
2011
110
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1109–1116, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Ordering Prenominal Modifiers with a Reranking Approach Jenny Liu MIT CSAIL [email protected] Aria Haghighi MIT CSAIL [email protected] Abstract In this work, we present a novel approach to the generation task of ordering prenominal modifiers. We take a maximum entropy reranking approach to the problem which admits arbitrary features on a permutation of modifiers, exploiting hundreds of thousands of features in total. We compare our error rates to the state-of-the-art and to a strong Google ngram count baseline. We attain a maximum error reduction of 69.8% and average error reduction across all test sets of 59.1% compared to the state-of-the-art and a maximum error reduction of 68.4% and average error reduction across all test sets of 41.8% compared to our Google n-gram count baseline. 1 Introduction Speakers rarely have difficulty correctly ordering modifiers such as adjectives, adverbs, or gerunds when describing some noun. The phrase “beautiful blue Macedonian vase” sounds very natural, whereas changing the modifier ordering to “blue Macedonian beautiful vase” is awkward (see Table 1 for more examples). In this work, we consider the task of ordering an unordered set of prenominal modifiers so that they sound fluent to native language speakers. This is an important task for natural language generation systems. Much linguistic research has investigated the semantic constraints behind prenominal modifier orderings. One common line of research suggests that modifiers can be organized by the underlying semantic property they describe and that there is a. the vegetarian French lawyer b. the French vegetarian lawyer a. the beautiful small black purse b. the beautiful black small purse c. the small beautiful black purse d. the small black beautiful purse Table 1: Examples of restrictions on modifier orderings from Teodorescu (2006). The most natural sounding ordering is in bold, followed by other possibilities that may only be appropriate in certain situations. an ordering on semantic properties which in turn restricts modifier orderings. For instance, Sproat and Shih (1991) contend that the size property precedes the color property and thus “small black cat” sounds more fluent than “black small cat”. Using > to denote precedence of semantic groups, some commonly proposed orderings are: quality > size > shape > color > provenance (Sproat and Shih, 1991), age > color > participle > provenance > noun > denominal (Quirk et al., 1974), and value > dimension > physical property > speed > human propensity > age > color (Dixon, 1977). However, correctly classifying modifiers into these groups can be difficult and may be domain dependent or constrained by the context in which the modifier is being used. In addition, these methods do not specify how to order modifiers within the same class or modifiers that do not fit into any of the specified groups. There have also been a variety of corpus-based, computational approaches. Mitchell (2009) uses 1109 a class-based approach in which modifiers are grouped into classes based on which positions they prefer in the training corpus, with a predefined ordering imposed on these classes. 
Shaw and Hatzivassiloglou (1999) developed three different approaches to the problem that use counting methods and clustering algorithms, and Malouf (2000) expands upon Shaw and Hatzivassiloglou’s work. This paper describes a computational solution to the problem that uses relevant features to model the modifier ordering process. By mapping a set of features across the training data and using a maximum entropy reranking model, we can learn optimal weights for these features and then order each set of modifiers in the test data according to our features and the learned weights. This approach has not been used before to solve the prenominal modifier ordering problem, and as we demonstrate, vastly outperforms the state-of-the-art, especially for sequences of longer lengths. Section 2 of this paper describes previous computational approaches. In Section 3 we present the details of our maximum entropy reranking approach. Section 4 covers the evaluation methods we used, and Section 5 presents our results. In Section 6 we compare our approach to previous methods, and in Section 7 we discuss future work and improvements that could be made to our system. 2 Related Work Mitchell (2009) orders sequences of at most 4 modifiers and defines nine classes that express the broad positional preferences of modifiers, where position 1 is closest to the noun phrase (NP) head and position 4 is farthest from it. Classes 1 through 4 comprise those modifiers that prefer only to be in positions 1 through 4, respectively. Class 5 through 7 modifiers prefer positions 1-2, 2-3, and 3-4, respectively, while class 8 modifiers prefer positions 1-3, and finally, class 9 modifiers prefer positions 2-4. Mitchell counts how often each word type appears in each of these positions in the training corpus. If any modifier’s probability of taking a certain position is greater than a uniform distribution would allow, then it is said to prefer that position. Each word type is then assigned a class, with a global ordering defined over the nine classes. Given a set of modifiers to order, if the entire set has been seen at training time, Mitchell’s system looks up the class of each modifier and then orders the sequence based on the predefined ordering for the classes. When two modifiers have the same class, the system picks between the possibilities randomly. If a modifier was not seen at training time and thus cannot be said to belong to a specific class, the system favors orderings where modifiers whose classes are known are as close to their classes’ preferred positions as possible. Shaw and Hatzivassiloglou (1999) use corpusbased counting methods as well. For a corpus with w word types, they define a w × w matrix where Count[A, B] indicates how often modifier A precedes modifier B. Given two modifiers a and b to order, they compare Count[a, b] and Count[b, a] in their training data. Assuming a null hypothesis that the probability of either ordering is 0.5, they use a binomial distribution to compute the probability of seeing the ordering < a, b > for Count[a, b] number of times. If this probability is above a certain threshold then they say that a precedes b. Shaw and Hatzivassiloglou also use a transitivity method to fill out parts of the Count table where bigrams are not actually seen in the training data but their counts can be inferred from other entries in the table, and they use a clustering method to group together modifiers with similar positional preferences. 
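A rough sketch of the Shaw and Hatzivassiloglou style pairwise decision is given below. The description above leaves the exact direction of the thresholding slightly underspecified, so this version declares an ordering only when the observed preference would be unlikely under the chance (p = 0.5) null hypothesis; the significance level is an assumed placeholder, not a value from the original work.

```python
from math import comb


def order_decision(count_ab, count_ba, significance=0.05):
    """Decide whether modifier a precedes b from corpus bigram counts,
    using a one-sided binomial test against the p = 0.5 null hypothesis."""
    n = count_ab + count_ba
    if n == 0:
        return None                      # never seen together: no evidence

    def tail(k):                         # P(X >= k) for X ~ Binomial(n, 0.5)
        return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

    if tail(count_ab) <= significance:
        return "a before b"
    if tail(count_ba) <= significance:
        return "b before a"
    return None                          # counts too balanced or too sparse


print(order_decision(18, 2))   # 'a before b'
print(order_decision(3, 4))    # None
```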
These methods have proven to work well, but they also suffer from sparsity issues in the training data. Mitchell reports a prediction accuracy of 78.59% for NPs of all lengths, but the accuracy of her approach is greatly reduced when two modifiers fall into the same class, since the system cannot make an informed decision in those cases. In addition, if a modifier is not seen in the training data, the system is unable to assign it a class, which also limits accuracy. Shaw and Hatzivassiloglou report a highest accuracy of 94.93% and a lowest accuracy of 65.93%, but since their methods depend heavily on bigram counts in the training corpus, they are also limited in how informed their decisions can be if modifiers in the test data are not present at training time. In this next section, we describe our maximum entropy reranking approach that tries to develop a more comprehensive model of the modifier ordering process to avoid the sparsity issues that previous ap1110 proaches have faced. 3 Model We treat the problem of prenominal modifier ordering as a reranking problem. Given a set B of prenominal modifiers and a noun phrase head H which B modifies, we define π(B) to be the set of all possible permutations, or orderings, of B. We suppose that for a set B there is some x∗∈π(B) which represents a “correct” natural-sounding ordering of the modifiers in B. At test time, we choose an ordering x ∈π(B) using a maximum entropy reranking approach (Collins and Koo, 2005). Our distribution over orderings x ∈π(B) is given by: P(x|H, B, W) = exp{W T φ(B, H, x)} ￿ x￿∈π(B) exp{W T φ(B, H, x￿)} where φ(B, H, x) is a feature vector over a particular ordering of B and W is a learned weight vector over features. We describe the set of features in section 3.1, but note that we are free under this formulation to use arbitrary features on the full ordering x of B as well as the head noun H, which we implicitly condition on throughout. Since the size of the set of prenominal modifiers B is typically less than six, enumerating π(B) is not expensive. At training time, our data consists of sequences of prenominal orderings and their corresponding nominal heads. We treat each sequence as a training example where the labeled ordering x∗∈π(B) is the one we observe. This allows us to extract any number of ‘labeled’ examples from part-of-speech text. Concretely, at training time, we select W to maximize: L(W) =   ￿ (B,H,x∗) P(x∗|H, B, W)  −￿W￿2 2σ2 where the first term represents our observed data likelihood and the second the ￿2 regularization, where σ2 is a fixed hyperparameter; we fix the value of σ2 to 0.5 throughout. We optimize this objective using standard L-BFGS optimization techniques. The key to the success of our approach is using the flexibility afforded by having arbitrary features φ(B, H, x) to capture all the salient elements of the prenominal ordering data. These features can be used to create a richer model of the modifier ordering process than previous corpus-based counting approaches. In addition, we can encapsulate previous approaches in terms of features in our model. Mitchell’s class-based approach can be expressed as a binary feature that tells us whether a given permuation satisfies the class ordering constraints in her model. Previous counting approaches can be expressed as a real-valued feature that, given all ngrams generated by a permutation of modifiers, returns the count of all these n-grams in the original training data. 
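The reranking distribution above is easy to sketch once a feature function is available: enumerate the permutations of B, apply the feature map, and take a softmax over the linear scores. The snippet below is a minimal illustration and not the authors' code; `featurize` is a user-supplied stand-in, and training (maximizing the regularized log-likelihood with L-BFGS) is omitted.

```python
import itertools
import math


def rerank(modifiers, head, weights, featurize):
    """Return every permutation of `modifiers` with its probability under
    P(x | H, B, W) proportional to exp(W . phi(B, H, x)), best first.

    featurize(ordering, head) -> dict of feature name -> value
    weights: dict of feature name -> learned weight in W
    """
    perms = list(itertools.permutations(modifiers))
    scores = [sum(weights.get(f, 0.0) * v
                  for f, v in featurize(x, head).items()) for x in perms]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]      # subtract max for stability
    z = sum(exps)
    return sorted(zip(perms, (e / z for e in exps)), key=lambda p: -p[1])


# Tiny illustrative feature map: reward orderings whose last modifier is a color
def toy_featurize(ordering, head):
    return {"color_next_to_head": 1.0 if ordering[-1] in {"blue", "black"} else 0.0}


best, prob = rerank(["blue", "beautiful"], "vase", {"color_next_to_head": 2.0}, toy_featurize)[0]
print(best, round(prob, 3))   # ('beautiful', 'blue') 0.881
```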
3.1 Feature Selection Our features are of the form φ(B, H, x) as expressed in the model above, and we include both indicator features and real-valued numeric features in our model. We attempt to capture aspects of the modifier permutations that may be significant in the ordering process. For instance, perhaps the majority of words that end with -ly are adverbs and should usually be positioned farthest from the head noun, so we can define an indicator function that captures this feature as follows: φ(B, H, x) =    1 if the modifier in position i of ordering x ends in -ly 0 otherwise We create a feature of this form for every possible modifier position i from 1 to 4. We might also expect permutations that contain ngrams previously seen in the training data to be more natural sounding than other permutations that generate n-grams that have not been seen before. We can express this as a real-valued feature: φ(B, H, x) = ￿ count in training data of all n-grams present in x See Table 2 for a summary of our features. Many of the features we use are similar to those in Dunlop et al. (2010), which uses a feature-based multiple sequence alignment approach to order modifiers. 1111 Numeric Features n-gram Count If N is the set of all n-grams present in the permutation, returns the sum of the counts of each element of N in the training data. A separate feature is created for 2-gms through 5-gms. Count of Head Noun and Closest Modifier Returns the count of < M, H > in the training data where H is the head noun and M is the modifier closest to H. Length of Modifier∗ Returns the length of modifier in position i Indicator Features Hyphenated∗ Modifier in position i contains a hyphen. Is Word w∗ Modifier in position i is word w ￿W, where W is the set of all word types in the training data. Ends In e∗ Modifier in position i ends in suffix e ￿E, where E = {-al -ble -ed -er -est -ic -ing -ive -ly -ian} Is A Color∗ Modifier in position i is a color, where we use a list of common colors Starts With a Number∗ Modifier in position i starts with a number Is a Number∗ Modifier in position i is a number Satisfies Mitchell Class Ordering The permutation’s class ordering satisfies the Mitchell class ordering constraints Table 2: Features Used In Our Model. Features with an asterisk (*) are created for all possible modifier positions i from 1 to 4. 4 Experiments 4.1 Data Preprocessing and Selection We extracted all noun phrases from four corpora: the Brown, Switchboard, and Wall Street Journal corpora from the Penn Treebank, and the North American Newswire corpus (NANC). Since there were very few NPs with more than 5 modifiers, we kept those with 2-5 modifiers and with tags NN or NNS for the head noun. We also kept NPs with only 1 modifier to be used for generating <modifier, head noun> bigram counts at training time. We then filtered all these NPs as follows: If the NP contained a PRP, IN, CD, or DT tag and the corresponding modifier was farthest away from the head noun, we removed this modifier and kept the rest of the NP. If the modifier was not the farthest away from the head noun, we discarded the NP. If the NP contained a POS tag we only kept the part of the phrase up to this tag. Our final set of NPs had tags from the following list: JJ, NN, NNP, NNS, JJS, JJR, VBG, VBN, RB, NNPS, RBS. See Table 3 for a summary of the number of NPs of lengths 1-5 extracted from the four corpora. Our system makes several passes over the data during the training process. 
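As a concrete illustration of a few of the Table 2 templates, a small featurizer might look like the sketch below; the passes over the data that populate the n-gram counts are described next. Position indexing (position 1 being the modifier closest to the head), the feature names, and the restriction to a subset of the templates (head-noun features are omitted) are assumptions made for illustration.

```python
from itertools import combinations

SUFFIXES = ("al", "ble", "ed", "er", "est", "ic", "ing", "ive", "ly", "ian")


def featurize(ordering, head, ngram_counts=None):
    """Partial, illustrative version of the Table 2 feature templates.
    `ordering` is a tuple of modifiers, leftmost first; `ngram_counts`
    maps modifier tuples to their training-corpus counts."""
    feats = {}
    for pos, mod in enumerate(reversed(ordering), start=1):   # pos 1 = next to head
        feats[f"length_pos{pos}"] = float(len(mod))
        feats[f"word={mod}_pos{pos}"] = 1.0
        if "-" in mod:
            feats[f"hyphen_pos{pos}"] = 1.0
        for suf in SUFFIXES:
            if mod.endswith(suf):
                feats[f"suffix={suf}_pos{pos}"] = 1.0
    if ngram_counts:
        # count of all (possibly non-consecutive) n-grams of this permutation
        feats["ngram_count"] = float(sum(
            ngram_counts.get(g, 0)
            for n in range(2, min(5, len(ordering)) + 1)
            for g in combinations(ordering, n)))
    return feats


print(featurize(("beautiful", "blue"), "vase"))
```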
In the first pass, we collect statistics about the data, to be used later on when calculating our numeric features. To collect the statistics, we take each NP in the training data and consider all possible 2gms through 5-gms that are present in the NP’s modifier sequence, allowing for non-consecutive n-grams. For example, the NP “the beautiful blue Macedonian vase” generates the following bigrams: <beautiful blue>, <blue Macedonian>, and <beautiful Macedonian>, along with the 3gram <beautiful blue Macedonian>. We keep a table mapping each unique n-gram to the number of times it has been seen in the training data. In addition, we also store a table that keeps track of bigram counts for < M, H >, where H is the head noun of an NP and M is the modifier closest to it. In the example “the beautiful blue Macedonian vase,” we would increment the count of < Macedonian, vase > in the table. The n-gram and < M, H > counts are used to compute numeric fea1112 Number of Sequences (Token) 1 2 3 4 5 Total Brown 11,265 1,398 92 8 2 12,765 WSJ 36,313 9,073 1,399 229 156 47,170 Switchboard 10,325 1,170 114 4 1 11,614 NANC 15,456,670 3,399,882 543,894 80,447 14,840 19,495,733 Number of Sequences (Type) 1 2 3 4 5 Total Brown 4,071 1,336 91 8 2 5,508 WSJ 7,177 6,687 1,205 182 42 15,293 Switchboard 2,122 950 113 4 1 3,190 NANC 241,965 876,144 264,503 48,060 8,451 1,439,123 Table 3: Number of NPs extracted from our data for NP sequences with 1 to 5 modifiers. ture values. 4.2 Google n-gram Baseline The Google n-gram corpus is a collection of n-gram counts drawn from public webpages with a total of one trillion tokens – around 1 billion each of unique 3-grams, 4-grams, and 5-grams, and around 300,000 unique bigrams. We created a Google n-gram baseline that takes a set of modifiers B, determines the Google n-gram count for each possible permutation in π(B), and selects the permutation with the highest n-gram count as the winning ordering x∗. We will refer to this baseline as GOOGLE N-GRAM. 4.3 Mitchell’s Class-Based Ordering of Prenominal Modifiers (2009) Mitchell’s original system was evaluated using only three corpora for both training and testing data: Brown, Switchboard, and WSJ. In addition, the evaluation presented by Mitchell’s work considers a prediction to be correct if the ordering of classes in that prediction is the same as the ordering of classes in the original test data sequence, where a class refers to the positional preference groupings defined in the model. We use a more stringent evaluation as described in the next section. We implemented our own version of Mitchell’s system that duplicates the model and methods but allows us to scale up to a larger training set and to apply our own evaluation techniques. We will refer to this baseline as CLASS BASED. 4.4 Evaluation To evaluate our system (MAXENT) and our baselines, we partitioned the corpora into training and testing data. For each NP in the test data, we generated a set of modifiers and looked at the predicted orderings of the MAXENT, CLASS BASED, and GOOGLE N-GRAM methods. We considered a predicted sequence ordering to be correct if it matches the original ordering of the modifiers in the corpus. We ran four trials, the first holding out the Brown corpus and using it as the test set, the second holding out the WSJ corpus, the third holding out the Switchboard corpus, and the fourth holding out a randomly selected tenth of the NANC. For each trial we used the rest of the data as our training set. 
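The first training pass described above reduces to counting order-preserving, possibly non-consecutive n-grams of each modifier sequence, plus the bigram of the head noun with its closest modifier. A minimal sketch follows (the data layout is an illustrative assumption, not the authors' code); it reproduces the "beautiful blue Macedonian vase" example from the text.

```python
from collections import Counter
from itertools import combinations


def collect_statistics(nps):
    """nps: iterable of (modifiers, head) pairs, where `modifiers` is a
    tuple ordered left to right, e.g. (("beautiful", "blue", "Macedonian"), "vase")."""
    ngram_counts = Counter()
    mod_head_counts = Counter()
    for mods, head in nps:
        for n in range(2, min(5, len(mods)) + 1):
            ngram_counts.update(combinations(mods, n))   # order-preserving n-grams
        if mods:
            mod_head_counts[(mods[-1], head)] += 1       # modifier closest to the head
    return ngram_counts, mod_head_counts


ngrams, mh = collect_statistics([(("beautiful", "blue", "Macedonian"), "vase")])
print(ngrams[("beautiful", "Macedonian")])   # 1, the non-consecutive bigram
print(mh[("Macedonian", "vase")])            # 1
```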
5 Results The MAXENT model consistently outperforms CLASS BASED across all test corpora and sequence lengths for both tokens and types, except when testing on the Brown and Switchboard corpora for modifier sequences of length 5, for which neither approach is able to make any correct predictions. However, there are only 3 sequences total of length 5 in the Brown and Swichboard corpora combined. 1113 Test Corpus Token Accuracy (%) Type Accuracy (%) 2 3 4 5 Total 2 3 4 5 Total Brown GOOGLE N-GRAM 82.4 35.9 12.5 0 79.1 81.8 36.3 12.5 0 78.4 CLASS BASED 79.3 54.3 25.0 0 77.3 78.9 54.9 25.0 0 77.0 MAXENT 89.4 70.7 87.5 0 88.1 89.1 70.3 87.5 0 87.8 WSJ GOOGLE N-GRAM 84.8 53.5 31.4 71.8 79.4 82.6 49.7 23.1 16.7 76.0 CLASS BASED 85.5 51.6 16.6 0.6 78.5 85.1 50.1 19.2 0 78.0 MAXENT 95.9 84.1 71.2 80.1 93.5 94.7 81.9 70.3 45.2 92.0 Switchboard GOOGLE N-GRAM 92.8 68.4 0 0 90.3 91.7 68.1 0 0 88.8 CLASS BASED 80.1 52.6 0 0 77.3 79.1 53.1 0 0 75.9 MAXENT 91.4 74.6 25.0 0 89.6 90.3 75.2 25.0 0 88.4 One Tenth of GOOGLE N-GRAM 86.8 55.8 27.7 43.0 81.1 79.2 44.6 20.5 12.3 70.4 NANC CLASS BASED 86.1 54.7 20.1 1.9 80.0 80.3 51.0 18.4 3.3 74.5 MAXENT 95.2 83.8 71.6 62.2 93.0 91.6 78.8 63.8 44.4 88.0 Test Corpus Number of Features Used In MaxEnt Model Brown 655,536 WSJ 654,473 Switchboard 655,791 NANC 565,905 Table 4: Token and type prediction accuracies for the GOOGLE N-GRAM, MAXENT, and CLASS BASED approaches for modifier sequences of lengths 2-5. Our data consisted of four corpuses: Brown, Switchboard, WSJ, and NANC. The test data was held out and each approach was trained on the rest of the data. Winning scores are in bold. The number of features used during training for the MAXENT approach for each test corpus is also listed. MAXENT also outperforms the GOOGLE N-GRAM baseline for almost all test corpora and sequence lengths. For the Switchboard test corpus token and type accuracies, the GOOGLE N-GRAM baseline is more accurate than MAXENT for sequences of length 2 and overall, but the accuracy of MAXENT is competitive with that of GOOGLE N-GRAM. If we examine the error reduction between MAXENT and CLASS BASED, we attain a maximum error reduction of 69.8% for the WSJ test corpus across modifier sequence tokens, and an average error reduction of 59.1% across all test corpora for tokens. MAXENT also attains a maximum error reduction of 68.4% for the WSJ test corpus and an average error reduction of 41.8% when compared to GOOGLE NGRAM. It should also be noted that on average the MAXENT model takes three hours to train with several hundred thousand features mapped across the training data (the exact number used during each test run is listed in Table 4) – this tradeoff is well worth the increase we attain in system performance. 6 Analysis MAXENT seems to outperform the CLASS BASED baseline because it learns more from the training data. The CLASS BASED model classifies each modifier in the training data into one of nine broad categories, with each category representing a different set of positional preferences. However, many of the modifiers in the training data get classified to the same category, and CLASS BASED makes a random choice when faced with orderings of modifiers all in the same category. 
When applying CLASS BASED 1114 0 20 40 60 80 100 0 10 20 30 40 50 60 70 80 90 100 Sequences of 2 Modifiers Portion of NANC Used in Training (%) Correct Predictions (%) MaxEnt ClassBased (a) 0 20 40 60 80 100 0 10 20 30 40 50 60 70 80 90 100 Sequences of 3 Modifiers Portion of NANC Used in Training (%) Correct Predictions (%) MaxEnt ClassBased (b) 0 20 40 60 80 100 0 10 20 30 40 50 60 70 80 90 100 Sequences of 4 Modifiers Portion of NANC Used in Training (%) Correct Predictions (%) MaxEnt ClassBased (c) 0 20 40 60 80 100 0 10 20 30 40 50 60 70 80 90 100 Sequences of 5 Modifiers Portion of NANC Used in Training (%) Correct Predictions (%) MaxEnt ClassBased (d) 0 20 40 60 80 100 0 10 20 30 40 50 60 70 80 90 100 All Modifier Sequences Portion of NANC Used in Training (%) Correct Predictions (%) MaxEnt ClassBased (e) 0 20 40 60 80 100 0 1 2 3 4 5 6 7 x 10 5 Features Used by MaxEnt Model Portion of NANC Used in Training (%) Number of Features Used (f) Figure 1: Learning curves for the MAXENT and CLASS BASED approaches. We start by training each approach on just the Brown and Switchboard corpora while testing on WSJ. We incrementally add portions of the NANC corpus. Graphs (a) through (d) break down the total correct predictions by the number of modifiers in a sequence, while graph (e) gives accuracies over modifier sequences of all lengths. Prediction percentages are for sequence tokens. Graph (f) shows the number of features active in the MaxEnt model as the training data scales up. 1115 to WSJ as the test data and training on the other corpora, 74.7% of the incorrect predictions contained at least 2 modifiers that were of the same positional preferences class. In contrast, MAXENT allows us to learn much more from the training data. As a result, we see much higher numbers when trained and tested on the same data as CLASS BASED. The GOOGLE N-GRAM method does better than the CLASS BASED approach because it contains ngram counts for more data than the WSJ, Brown, Switchboard, and NANC corpora combined. However, GOOGLE N-GRAM suffers from sparsity issues as well when testing on less common modifier combinations. For example, our data contains rarely heard sequences such as “Italian, state-owned, holding company” or “armed Namibian nationalist guerrillas.” While MAXENT determines the correct ordering for both of these examples, none of the permutations of either example show up in the Google n-gram corpus, so the GOOGLE N-GRAM method is forced to randomly select from the six possibilities. In addition, the Google n-gram corpus is composed of sentence fragments that may not necessarily be NPs, so we may be overcounting certain modifier permutations that can function as different parts of a sentence. We also compared the effect that increasing the amount of training data has when using the CLASS BASED and MAXENT methods by initially training each system with just the Brown and Switchboard corpora and testing on WSJ. Then we incrementally added portions of NANC, one tenth at a time, until the training set included all of it. The results (see Figure 1) show that we are able to benefit from the additional data much more than the CLASS BASED approach can, since we do not have a fixed set of classes limiting the amount of information the model can learn. In addition, adding the first tenth of NANC made the biggest difference in increasing accuracy for both approaches. 
7 Conclusion The straightforward maximum entropy reranking approach is able to significantly outperform previous computational approaches by allowing for a richer model of the prenominal modifier ordering process. Future work could include adding more features to the model and conducting ablation testing. In addition, while many sets of modifiers have stringent ordering requirements, some variations on orderings, such as “former famous actor” vs. “famous former actor,” are acceptable in both forms and have different meanings. It may be beneficial to extend the model to discover these ambiguities. Acknowledgements Many thanks to Margaret Mitchell, Regina Barzilay, Xiao Chen, and members of the CSAIL NLP group for their help and suggestions. References M. Collins and T. Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–70. R. M. W. Dixon. 1977. Where Have all the Adjectives Gone? Studies in Language, 1(1):19–80. A. Dunlop, M. Mitchell, and B. Roark. 2010. Prenominal modifier ordering via multiple sequence alignment. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 600– 608. Association for Computational Linguistics. R. Malouf. 2000. The order of prenominal adjectives in natural language generation. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 85–92. Association for Computational Linguistics. M. Mitchell. 2009. Class-based ordering of prenominal modifiers. In Proceedings of the 12th European Workshop on Natural Language Generation, pages 50–57. Association for Computational Linguistics. R. Quirk, S. Greenbaum, R.A. Close, and R. Quirk. 1974. A university grammar of English, volume 1985. Longman London. J. Shaw and V. Hatzivassiloglou. 1999. Ordering among premodifiers. In Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics, pages 135–143. Association for Computational Linguistics. R. Sproat and C. Shih. 1991. The cross-linguistic distribution of adjective ordering restrictions. Interdisciplinary approaches to language, pages 565–593. A. Teodorescu. 2006. Adjective Ordering Restrictions Revisited. In Proceedings of the 25th West Coast Conference on Formal Linguistics, pages 399–407. West Coast Conference on Formal Linguistics. 1116
2011
111
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1117–1126, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Unsupervised Semantic Role Induction via Split-Merge Clustering Joel Lang and Mirella Lapata Institute for Language, Cognition and Computation School of Informatics, University of Edinburgh 10 Crichton Street, Edinburgh EH8 9AB, UK [email protected], [email protected] Abstract In this paper we describe an unsupervised method for semantic role induction which holds promise for relieving the data acquisition bottleneck associated with supervised role labelers. We present an algorithm that iteratively splits and merges clusters representing semantic roles, thereby leading from an initial clustering to a final clustering of better quality. The method is simple, surprisingly effective, and allows to integrate linguistic knowledge transparently. By combining role induction with a rule-based component for argument identification we obtain an unsupervised end-to-end semantic role labeling system. Evaluation on the CoNLL 2008 benchmark dataset demonstrates that our method outperforms competitive unsupervised approaches by a wide margin. 1 Introduction Recent years have seen increased interest in the shallow semantic analysis of natural language text. The term is most commonly used to describe the automatic identification and labeling of the semantic roles conveyed by sentential constituents (Gildea and Jurafsky, 2002). Semantic roles describe the relations that hold between a predicate and its arguments, abstracting over surface syntactic configurations. In the example sentences below. window occupies different syntactic positions — it is the object of broke in sentences (1a,b), and the subject in (1c) — while bearing the same semantic role, i.e., the physical object affected by the breaking event. Analogously, rock is the instrument of break both when realized as a prepositional phrase in (1a) and as a subject in (1b). (1) a. [Joe]A0 broke the [window]A1 with a [rock]A2. b. The [rock]A2 broke the [window]A1. c. The [window]A1 broke. The semantic roles in the examples are labeled in the style of PropBank (Palmer et al., 2005), a broad-coverage human-annotated corpus of semantic roles and their syntactic realizations. Under the PropBank annotation framework (which we will assume throughout this paper) each predicate is associated with a set of core roles (named A0, A1, A2, and so on) whose interpretations are specific to that predicate1 and a set of adjunct roles (e.g., location or time) whose interpretation is common across predicates. This type of semantic analysis is admittedly shallow but relatively straightforward to automate and useful for the development of broad coverage, domain-independent language understanding systems. Indeed, the analysis produced by existing semantic role labelers has been shown to benefit a wide spectrum of applications ranging from information extraction (Surdeanu et al., 2003) and question answering (Shen and Lapata, 2007), to machine translation (Wu and Fung, 2009) and summarization (Melli et al., 2005). Since both argument identification and labeling can be readily modeled as classification tasks, most state-of-the-art systems to date conceptualize se1More precisely, A0 and A1 have a common interpretation across predicates as proto-agent and proto-patient in the sense of Dowty (1991). 1117 mantic role labeling as a supervised learning problem. 
Current approaches have high performance — a system will recall around 81% of the arguments correctly and 95% of those will be assigned a correct semantic role (see M`arquez et al. (2008) for details), however only on languages and domains for which large amounts of role-annotated training data are available. For instance, systems trained on PropBank demonstrate a marked decrease in performance (approximately by 10%) when tested on out-of-domain data (Pradhan et al., 2008). Unfortunately, the reliance on role-annotated data which is expensive and time-consuming to produce for every language and domain, presents a major bottleneck to the widespread application of semantic role labeling. Given the data requirements for supervised systems and the current paucity of such data, unsupervised methods offer a promising alternative. They require no human effort for training thus leading to significant savings in time and resources required for annotating text. And their output can be used in different ways, e.g., as a semantic preprocessing step for applications that require broad coverage understanding or as training material for supervised algorithms. In this paper we present a simple approach to unsupervised semantic role labeling. Following common practice, our system proceeds in two stages. It first identifies the semantic arguments of a predicate and then assigns semantic roles to them. Both stages operate over syntactically analyzed sentences without access to any data annotated with semantic roles. Argument identification is carried out through a small set of linguistically-motivated rules, whereas role induction is treated as a clustering problem. In this setting, the goal is to assign argument instances to clusters such that each cluster contains arguments corresponding to a specific semantic role and each role corresponds to exactly one cluster. We formulate a clustering algorithm that executes a series of split and merge operations in order to transduce an initial clustering into a final clustering of better quality. Split operations leverage syntactic cues so as to create “pure” clusters that contain arguments of the same role whereas merge operations bring together argument instances of a particular role located in different clusters. We test the effectiveness of our induction method on the CoNLL 2008 benchmark dataset and demonstrate improvements over competitive unsupervised methods by a wide margin. 2 Related Work As mentioned earlier, much previous work has focused on building supervised SRL systems (M`arquez et al., 2008). A few semi-supervised approaches have been developed within a framework known as annotation projection. The idea is to combine labeled and unlabeled data by projecting annotations from a labeled source sentence onto an unlabeled target sentence within the same language (F¨urstenau and Lapata, 2009) or across different languages (Pad´o and Lapata, 2009). Outwith annotation projection, Gordon and Swanson (2007) attempt to increase the coverage of PropBank by leveraging existing labeled data. Rather than annotating new sentences that contain previously unseen verbs, they find syntactically similar verbs and use their annotations as surrogate training data. Swier and Stevenson (2004) induce role labels with a bootstrapping scheme where the set of labeled instances is iteratively expanded using a classifier trained on previously labeled instances. Their method is unsupervised in that it starts with a dataset containing no role annotations at all. 
However, it requires significant human effort as it makes use of VerbNet (Kipper et al., 2000) in order to identify the arguments of predicates and make initial role assignments. VerbNet is a broad coverage lexicon organized into verb classes each of which is explicitly associated with argument realization and semantic role specifications. Abend et al. (2009) propose an algorithm that identifies the arguments of predicates by relying only on part of speech annotations, without, however, assigning semantic roles. In contrast, Lang and Lapata (2010) focus solely on the role induction problem which they formulate as the process of detecting alternations and finding a canonical syntactic form for them. Verbal arguments are then assigned roles, according to their position in this canonical form, since each position references a specific role. Their model extends the logistic classifier with hidden variables and is trained in a manner that makes use of the close relationship between syntactic functions and semantic roles. Grenager and Manning 1118 (2006) propose a directed graphical model which relates a verb, its semantic roles, and their possible syntactic realizations. Latent variables represent the semantic roles of arguments and role induction corresponds to inferring the state of these latent variables. Our own work also follows the unsupervised learning paradigm. We formulate the induction of semantic roles as a clustering problem and propose a split-merge algorithm which iteratively manipulates clusters representing semantic roles. The motivation behind our approach was to design a conceptually simple system, that allows for the incorporation of linguistic knowledge in a straightforward and transparent manner. For example, arguments occurring in similar syntactic positions are likely to bear the same semantic role and should therefore be grouped together. Analogously, arguments that are lexically similar are likely to represent the same semantic role. We operationalize these notions using a scoring function that quantifies the compatibility between arbitrary cluster pairs. Like Lang and Lapata (2010) and Grenager and Manning (2006) our method operates over syntactically parsed sentences, without, however, making use of any information pertaining to semantic roles (e.g., in form of a lexical resource or manually annotated data). Performing role-semantic analysis without a treebanktrained parser is an interesting research direction, however, we leave this to future work. 3 Learning Setting We follow the general architecture of supervised semantic role labeling systems. Given a sentence and a designated verb, the SRL task consists of identifying the arguments of the verbal predicate (argument identification) and labeling them with semantic roles (role induction). In our case neither argument identification nor role induction relies on role-annotated data or other semantic resources although we assume that the input sentences are syntactically analyzed. Our approach is not tied to a specific syntactic representation — both constituent- and dependency-based representations could be used. However, we opted for a dependency-based representation, as it simplifies argument identification considerably and is consistent with the CoNLL 2008 benchmark dataset used for evaluation in our experiments. Given a dependency parse of a sentence, our system identifies argument instances and assigns them to clusters. 
Thereafter, argument instances can be labeled with an identifier corresponding to the cluster they have been assigned to, similar to PropBank core labels (e.g., A0, A1). 4 Argument Identification In the supervised setting, a classifier is employed in order to decide for each node in the parse tree whether it represents a semantic argument or not. Nodes classified as arguments are then assigned a semantic role. In the unsupervised setting, we slightly reformulate argument identification as the task of discarding as many non-semantic arguments as possible. This means that the argument identification component does not make a final positive decision for any of the argument candidates; instead, this decision is deferred to role induction. The rules given in Table 1 are used to discard or select argument candidates. They primarily take into account the parts of speech and the syntactic relations encountered when traversing the dependency tree from predicate to argument. For each candidate, the first matching rule is applied. We will exemplify how the argument identification component works for the predicate expect in the sentence “The company said it expects its sales to remain steady” whose parse tree is shown in Figure 1. Initially, all words save the predicate itself are treated as argument candidates. Then, the rules from Table 1 are applied as follows. Firstly, words the and to are discarded based on their part of speech (rule (1)); then, remain is discarded because the path ends with the relation IM and said is discarded as the path ends with an upward-leading OBJ relation (rule (2)). Rule (3) does not match and is therefore not applied. Next, steady is discarded because there is a downward-leading OPRD relation along the path and the words company and its are discarded because of the OBJ relations along the path (rule (4)). Rule (5) does not apply but words it and sales are kept as likely arguments (rule (6)). Finally, rule (7) does not apply, because there are no candidates left. 1119 1. Discard a candidate if it is a determiner, infinitival marker, coordinating conjunction, or punctuation. 2. Discard a candidate if the path of relations from predicate to candidate ends with coordination, subordination, etc. (see the Appendix for the full list of relations). 3. Keep a candidate if it is the closest subject (governed by the subject-relation) to the left of a predicate and the relations from predicate p to the governor g of the candidate are all upward-leading (directed as g →p). 4. Discard a candidate if the path between the predicate and the candidate, excluding the last relation, contains a subject relation, adjectival modifier relation, etc. (see the Appendix for the full list of relations). 5. Discard a candidate if it is an auxiliary verb. 6. Keep a candidate if the predicate is its parent. 7. Keep a candidate if the path from predicate to candidate leads along several verbal nodes (verb chain) and ends with arbitrary relation. 8. Discard all remaining candidates. Table 1: Argument identification rules. 5 Split-Merge Role Induction We treat role induction as a clustering problem with the goal of assigning argument instances (i.e., specific arguments occurring in an input sentence) to clusters such that these represent semantic roles. In accordance with PropBank, we induce a separate set of clusters for each verb and each cluster thus represents a verb-specific role. 
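The rule cascade in Table 1 above can be approximated by a filter over (part of speech, dependency path) pairs, where the path records each relation and its direction from the predicate to the candidate. The sketch below models only a subset of the rules (1, 2, 4, 6 and 8) with illustrative tag and relation names, so it should be read as a rough rendering rather than the authors' component.

```python
DISCARD_POS = {"DT", "TO", "CC", ".", ",", ":"}    # rule 1, approximate tag set


def keep_candidate(pos_tag, path):
    """path: list of (relation, direction) steps from predicate to candidate,
    with direction 'up' (towards the root) or 'down'."""
    if pos_tag in DISCARD_POS:                                   # rule 1
        return False
    last_rel, last_dir = path[-1]
    if last_dir == "up" and last_rel in {"OBJ", "IM", "COORD"}:  # part of rule 2
        return False
    if any(rel in {"SBJ", "OBJ", "OPRD", "AMOD"} for rel, _ in path[:-1]):  # rule 4
        return False
    if len(path) == 1 and path[0][1] == "down":                  # rule 6: predicate is parent
        return True
    return False                                                 # rule 8: discard the rest


# Roughly following the worked example above ("it expects its sales ...");
# the relation labels on the paths are assumed for illustration.
print(keep_candidate("PRP", [("SBJ", "down")]))                  # True  -> "it"
print(keep_candidate("NNS", [("OBJ", "down")]))                  # True  -> "sales"
print(keep_candidate("NN",  [("OBJ", "up"), ("SBJ", "down")]))   # False -> "company"
```

The split-merge clustering that consumes the surviving candidates is described next.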
Our algorithm works by iteratively splitting and merging clusters of argument instances in order to arrive at increasingly accurate representations of semantic roles. Although splits and merges could be arbitrarily interleaved, our algorithm executes a single split operation (split phase), followed by a series of merges (merge phase). The split phase partitions the seed cluster containing all argument instances of a particular verb into more fine-grained (sub-)clusters. This initial split results in a clustering with high purity but low collocation, i.e., argument instances in each cluster tend to belong to the same role but argument instances of a particular role are Figure 1: A sample dependency parse with dependency labels SBJ (subject), OBJ (object), NMOD (nominal modifier), OPRD (object predicative complement), PRD (predicative complement), and IM (infinitive marker). See Surdeanu et al. (2008) for more details on this variant of dependency syntax. located in many clusters. The degree of dislocation is reduced in the consecutive merge phase, in which clusters that are likely to represent the same role are merged. 5.1 Split Phase Initially, all arguments of a particular verb are placed in a single cluster. The goal then is to partition this cluster in such a way that the split-off clusters have high purity, i.e., contain argument instances of the same role. Towards this end, we characterize each argument instance by a key, formed by concatenating the following syntactic cues: • verb voice (active/passive); • argument linear position relative to predicate (left/right); • syntactic relation of argument to its governor; • preposition used for argument realization. A cluster is allocated for each key and all argument instances with a matching key are assigned to that cluster. Since each cluster encodes fine-grained syntactic distinctions, we assume that arguments occurring in the same position are likely to bear the same semantic role. The assumption is largely supported by our empirical results (see Section 7); the clusters emerging from the initial split phase have a purity of approximately 90%. While the incorporation of additional cues (e.g., indicating the part of speech of the subject or transitivity) would result in even greater purity, it would also create problematically small clusters, thereby negatively affecting the successive merge phase. 1120 5.2 Merge Phase The split phase creates clusters with high purity, however, argument instances of a particular role are often scattered amongst many clusters resulting in a cluster assignment with low collocation. The goal of the merge phase is to improve collocation by executing a series of merge steps. At each step, pairs of clusters are considered for merging. Each pair is scored by a function that reflects how likely the two clusters are to contain arguments of the same role and the best scoring pair is chosen for merging. In the following, we will specify which pairs of clusters are considered (candidate search), how they are scored, and when the merge phase terminates. 5.2.1 Candidate Search In principle, we could simply enumerate and score all possible cluster pairs at each iteration. In practice however, such a procedure has a number of drawbacks. Besides being inefficient, it requires a scoring function with comparable scores for arbitrary pairs of clusters. For example, let a, b, c, and d denote clusters. Then, score(a,b) and score(c,d) must be comparable. 
This is a stronger requirement than demanding that only scores involving some common cluster (e.g., score(a,b) and score(a,c)) be comparable. Moreover, it would be desirable to exclude pairings involving small clusters (i.e., with few instances) as scores for these tend to be unreliable. Rather than considering all cluster pairings, we therefore select a specific cluster at each step and score merges between this cluster and certain other clusters. If a sufficiently good merge is found, it is executed, otherwise the clustering does not change. In addition, we prioritize merges between large clusters and avoid merges between small clusters. Algorithm 1 implements our merging procedure. Each pass through the inner loop (lines 4–12) selects a different cluster to consider at that step. Then, merges between the selected cluster and all larger clusters are considered. The highest-scoring merge is executed, unless all merges are ruled out, i.e., have a score below the threshold α. After each completion of the inner loop, the thresholds contained in the scoring function (discussed below) are adjusted and this is repeated until some termination criterion is met (discussed in Section 5.2.3). Algorithm 1: Cluster merging procedure. Operation merge(Li,L j) merges cluster Li into cluster L j and removes Li from the list L. 1 while not done do 2 L ←a list of all clusters sorted by number of instances in descending order 3 i ←1 4 while i < length(L) do 5 j ←arg max 0≤j′<iscore(Li,L j′) 6 if score(Li,L j) ≥α then 7 merge(Li,L j) 8 end 9 else 10 i ←i+1 11 end 12 end 13 adjust thresholds 14 end 5.2.2 Scoring Function Our scoring function quantifies whether two clusters are likely to contain arguments of the same role and was designed to reflect the following criteria: 1. whether the arguments found in the two clusters are lexically similar; 2. whether clause-level constraints are satisfied, specifically the constraint that all arguments of a particular clause have different semantic roles, i.e., are assigned to different clusters; 3. whether the arguments present in the two clusters have similar parts of speech. Qualitatively speaking, criteria (2) and (3) provide negative evidence in the sense that they can be used to rule out incorrect merges but not to identify correct ones. For example, two clusters with drastically different parts of speech are unlikely to represent the same role. However, the converse is not necessarily true as part of speech similarity does not imply role-semantic similarity. Analogously, the fact that clause-level constraints are not met provides evidence against a merge, but the fact that these are satisfied is not reliable evidence in favor of a merge. In contrast, lexical similarity implies that the clus1121 ters are likely to represent the same semantic role. It is reasonable to assume that due to selectional restrictions, verbs will be associated with lexical units that are semantically related and assume similar syntactic positions (e.g., eat prefers as an object edible things such as apple, biscuit, meat), thus bearing the same semantic role. Unavoidably, lexical similarity will be more reliable for arguments with overt lexical content as opposed to pronouns, however this should not impact the scoring of sufficiently large clusters. 
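To make the overall procedure easier to follow, here is a compact sketch of the key-based split phase and the merge loop of Algorithm 1. It is a schematic rendering under stated assumptions: argument instances are objects with voice, position, relation, and preposition attributes (names chosen for illustration), and the score and adjust_thresholds callbacks stand in for the scoring function and threshold schedule specified below.

```python
from collections import defaultdict

def split_phase(instances):
    """Section 5.1: partition a verb's argument instances by their syntactic key."""
    clusters = defaultdict(list)
    for inst in instances:
        key = (inst.voice, inst.position, inst.relation, inst.preposition)
        clusters[key].append(inst)
    return list(clusters.values())

def merge_phase(clusters, score, adjust_thresholds, terminated, alpha=0.1):
    """Algorithm 1: merge the selected cluster into the best-scoring larger cluster,
    unless every candidate merge falls below the threshold alpha."""
    while not terminated():
        L = sorted(clusters, key=len, reverse=True)              # largest clusters first
        i = 1
        while i < len(L):
            j = max(range(i), key=lambda k: score(L[i], L[k]))   # best larger cluster
            if score(L[i], L[j]) >= alpha:
                L[j].extend(L[i])                                # merge L_i into L_j ...
                del L[i]                                         # ... and drop L_i from the list
            else:
                i += 1                                           # no acceptable merge; move on
        adjust_thresholds()                                      # relax beta/gamma (Section 5.2.3)
        clusters = L
    return clusters
```

In the actual algorithm, termination is driven by the threshold schedule itself (the loop ends once gamma reaches zero), so terminated() and adjust_thresholds() would share state; they are separated here only for readability.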
Each of the criteria mentioned above is quantified through a separate score and combined into an overall similarity function, which scores two clusters c and c′ as follows:

score(c,c′) = \begin{cases} 0 & \text{if } pos(c,c′) < \beta \\ 0 & \text{if } cons(c,c′) < \gamma \\ lex(c,c′) & \text{otherwise} \end{cases}   (2)

The particular form of this function is motivated by the distinction between positive and negative evidence. When the part-of-speech similarity (pos) is below a certain threshold β or when clause-level constraints (cons) are satisfied to a lesser extent than threshold γ, the score takes value zero and the merge is ruled out. If this is not the case, the lexical similarity score (lex) determines the magnitude of the overall score. In the remainder of this section we will explain how the individual scores (pos, cons, and lex) are defined and then move on to discuss how the thresholds β and γ are adjusted.

Lexical Similarity We measure lexical similarity between two clusters through cosine similarity. Specifically, each cluster is represented as a vector whose components correspond to the occurrence frequencies of the argument head words in the cluster. The similarity on such vectors x and y is then quantified as:

lex(x,y) = cossim(x,y) = \frac{x \cdot y}{\|x\| \, \|y\|}   (3)

Clause-Level Constraints Arguments occurring in the same clause cannot bear the same role. Therefore, clusters should not merge if the resulting cluster contains (many) arguments of the same clause. For two clusters c and c′ we assess how well they satisfy this clause-level constraint by computing:

cons(c,c′) = 1 - \frac{2 \cdot viol(c,c′)}{N_c + N_{c′}}   (4)

where viol(c,c′) refers to the number of pairs of instances (d,d′) ∈ c×c′ for which d and d′ occur in the same clause (each instance can participate in at most one pair) and N_c and N_{c′} are the number of instances in clusters c and c′, respectively.

Part-of-speech Similarity Part-of-speech similarity is also measured through cosine similarity (equation (3)). Clusters are again represented as vectors x and y whose components correspond to argument part-of-speech tags and values to their occurrence frequency.

5.2.3 Threshold Adaptation and Termination As mentioned earlier the thresholds β and γ which parametrize the scoring function are adjusted at each iteration. The idea is to start with a very restrictive setting (high values) in which the negative evidence rules out merges more strictly, and then to gradually relax the requirement for a merge by lowering the threshold values. This procedure prioritizes reliable merges over less reliable ones. More concretely, our threshold adaptation procedure starts with β and γ both set to value 0.95. Then β is lowered by 0.05 at each step, leaving γ unchanged. When β becomes zero, γ is lowered by 0.05 and β is reset to 0.95. Then β is iteratively decreased again until it becomes zero, after which γ is decreased by another 0.05. This is repeated until γ becomes zero, at which point the algorithm terminates. Note that the termination criterion is not tied explicitly to the number of clusters, which is therefore determined automatically.

6 Experimental Setup In this section we describe how we assessed the performance of our system. We discuss the dataset on which our experiments were carried out, explain how our system's output was evaluated and present the methods used for comparison with our approach.
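The scoring function and threshold schedule just described translate fairly directly into code. The sketch below is a non-authoritative rendering of equations (2)–(4) and the adaptation procedure of Section 5.2.3; the attribute names head_word, pos_tag, and clause_id, and the pairing used to count violations, are assumptions made for illustration.

```python
import math
from collections import Counter

def cosine(cluster_a, cluster_b, attribute):
    """Cosine similarity over frequency vectors of an attribute
    (head_word for lex, equation (3); pos_tag for pos)."""
    va = Counter(getattr(x, attribute) for x in cluster_a)
    vb = Counter(getattr(x, attribute) for x in cluster_b)
    dot = sum(va[k] * vb[k] for k in va)
    norm = math.sqrt(sum(n * n for n in va.values())) * math.sqrt(sum(n * n for n in vb.values()))
    return dot / norm if norm else 0.0

def cons(cluster_a, cluster_b):
    """Equation (4): penalize instance pairs that share a clause;
    each instance participates in at most one pair."""
    ca = Counter(x.clause_id for x in cluster_a)
    cb = Counter(x.clause_id for x in cluster_b)
    viol = sum(min(ca[c], cb[c]) for c in ca)
    return 1.0 - 2.0 * viol / (len(cluster_a) + len(cluster_b))

def score(cluster_a, cluster_b, beta, gamma):
    """Equation (2): negative evidence vetoes a merge, lexical similarity scores it."""
    if cosine(cluster_a, cluster_b, "pos_tag") < beta:
        return 0.0
    if cons(cluster_a, cluster_b) < gamma:
        return 0.0
    return cosine(cluster_a, cluster_b, "head_word")

def threshold_schedule(step=0.05, levels=19):
    """Section 5.2.3: lower beta from 0.95 to 0 for each value of gamma,
    then lower gamma; terminate once gamma reaches zero."""
    for g in range(levels, 0, -1):
        for b in range(levels, -1, -1):
            yield b * step, g * step
```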
Data For evaluation purposes, the system's output was compared against the CoNLL 2008 shared task dataset (Surdeanu et al., 2008) which provides PropBank-style gold standard annotations. The dataset was taken from the Wall Street Journal portion of the Penn Treebank corpus and converted into a dependency format (Surdeanu et al., 2008). In addition to gold standard dependency parses, the dataset also contains automatic parses obtained from the MaltParser (Nivre et al., 2007). Although the dataset provides annotations for verbal and nominal predicate-argument constructions, we only considered the former, following previous work on semantic role labeling (Màrquez et al., 2008).

            Syntactic Function     Lang and Lapata        Split-Merge
            PU    CO    F1         PU    CO    F1         PU    CO    F1
auto/auto   72.9  73.9  73.4       73.2  76.0  74.6       81.9  71.2  76.2
gold/auto   77.7  80.1  78.9       75.6  79.4  77.4       84.0  74.4  78.9
auto/gold   77.0  71.0  73.9       77.9  74.4  76.2       86.5  69.8  77.3
gold/gold   81.6  77.5  79.5       79.5  76.5  78.0       88.7  73.0  80.1
Table 2: Clustering results with our split-merge algorithm, the unsupervised model proposed in Lang and Lapata (2010) and a baseline that assigns arguments to clusters based on their syntactic function.

Evaluation Metrics For each verb, we determine the extent to which argument instances in a cluster share the same gold standard role (purity) and the extent to which a particular gold standard role is assigned to a single cluster (collocation). More formally, for each group of verb-specific clusters we measure the purity of the clusters as the percentage of instances belonging to the majority gold class in their respective cluster. Let N denote the total number of instances, G_j the set of instances belonging to the j-th gold class and C_i the set of instances belonging to the i-th cluster. Purity can then be written as:

PU = \frac{1}{N} \sum_i \max_j |G_j \cap C_i|   (5)

Collocation is defined as follows. For each gold role, we determine the cluster with the largest number of instances for that role (the role's primary cluster) and then compute the percentage of instances that belong to the primary cluster for each gold role as:

CO = \frac{1}{N} \sum_j \max_i |G_j \cap C_i|   (6)

The per-verb scores are aggregated into an overall score by averaging over all verbs. We use the micro-average obtained by weighting the scores for individual verbs proportionately to the number of instances for that verb. Finally, we use the harmonic mean of purity and collocation as a single measure of clustering quality:

F1 = \frac{2 \times CO \times PU}{CO + PU}   (7)

Comparison Models We compared our split-merge algorithm against two competitive approaches. The first one assigns argument instances to clusters according to their syntactic function (e.g., subject, object) as determined by a parser. This baseline has been previously used as point of comparison by other unsupervised semantic role labeling systems (Grenager and Manning, 2006; Lang and Lapata, 2010) and shown difficult to outperform. Our implementation allocates up to N = 21 clusters2 for each verb, one for each of the 20 most frequent functions in the CoNLL dataset and a default cluster for all other functions. The second comparison model is the one proposed in Lang and Lapata (2010) (see Section 2). We used the same model settings (with 10 latent variables) and feature set proposed in that paper. Our method's only parameter is the threshold α which we heuristically set to 0.1. On average our method induces 10 clusters per verb.

7 Results Our results are summarized in Table 2.
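For concreteness, the per-verb evaluation metrics in equations (5)–(7) can be computed in a few lines. In the sketch below, clusters are lists of instance identifiers and gold is a dictionary mapping each instance to its gold role; this representation is an assumption for illustration, not the authors' code.

```python
def purity(clusters, gold):
    """Equation (5): fraction of instances carrying the majority gold role of their cluster."""
    total = sum(len(c) for c in clusters)
    roles = set(gold.values())
    return sum(max(sum(1 for x in c if gold[x] == r) for r in roles) for c in clusters) / total

def collocation(clusters, gold):
    """Equation (6): fraction of instances falling into their gold role's primary cluster."""
    total = sum(len(c) for c in clusters)
    return sum(max(sum(1 for x in c if gold[x] == r) for c in clusters)
               for r in set(gold.values())) / total

def f1(pu, co):
    """Equation (7): harmonic mean of purity and collocation."""
    return 2 * pu * co / (pu + co)

# A toy clustering: cluster 1 mixes A0 and A1, cluster 2 is pure A1.
clusters = [["a", "b", "c"], ["d", "e"]]
gold = {"a": "A0", "b": "A0", "c": "A1", "d": "A1", "e": "A1"}
pu, co = purity(clusters, gold), collocation(clusters, gold)
print(round(pu, 2), round(co, 2), round(f1(pu, co), 2))   # 0.8 0.8 0.8
```

Corpus-level scores then micro-average these per-verb values, weighting each verb by its number of instances.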
We report cluster purity (PU), collocation (CO) and their harmonic mean (F1) for the baseline (Syntactic Function), Lang and Lapata’s (2010) model and our split-merge algorithm (Split-Merge) on four 2This is the number of gold standard roles. 1123 Syntactic Function Split-Merge Verb Freq PU CO F1 PU CO F1 say 15238 91.4 91.3 91.4 93.6 81.7 87.2 make 4250 68.6 71.9 70.2 73.3 72.9 73.1 go 2109 45.1 56.0 49.9 52.7 51.9 52.3 increase 1392 59.7 68.4 63.7 68.8 71.4 70.1 know 983 62.4 72.7 67.1 63.7 65.9 64.8 tell 911 61.9 76.8 68.6 77.5 70.8 74.0 consider 753 63.5 65.6 64.5 79.2 61.6 69.3 acquire 704 75.9 79.7 77.7 80.1 76.6 78.3 meet 574 76.7 76.0 76.3 88.0 69.7 77.8 send 506 69.6 63.8 66.6 83.6 65.8 73.6 open 482 63.1 73.4 67.9 77.6 62.2 69.1 break 246 53.7 58.9 56.2 68.7 53.3 60.0 Table 3: Clustering results for individual verbs with our split-merge algorithm and the syntactic function baseline. datasets. These result from the combination of automatic parses with automatically identified arguments (auto/auto), gold parses with automatic arguments (gold/auto), automatic parses with gold arguments (auto/gold) and gold parses with gold arguments (gold/gold). Bold-face is used to highlight the best performing system under each measure on each dataset (e.g., auto/auto, gold/auto and so on). On all datasets, our method achieves the highest purity and outperforms both comparison models by a wide margin which in turn leads to a considerable increase in F1. On the auto/auto dataset the splitmerge algorithm results in 9% higher purity than the baseline and increases F1 by 2.8%. Lang and Lapata’s (2010) logistic classifier achieves higher collocation but lags behind our method on the other two measures. Not unexpectedly, we observe an increase in performance for all models when using gold standard parses. On the gold/auto dataset, F1 increases by 2.7% for the split-merge algorithm, 2.7% for the logistic classifier, and 5.5% for the syntactic function baseline. Split-Merge maintains the highest purity and levels the baseline in terms of F1. Performance also increases if gold standard arguments are used instead of automatically identified arguments. Consequently, each model attains its best scores on the gold/gold dataset. We also assessed the argument identification comSyntactic Function Split-Merge Role PU CO F1 PU CO F1 A0 74.5 87.0 80.3 79.0 88.7 83.6 A1 82.3 72.0 76.8 87.1 73.0 79.4 A2 65.0 67.3 66.1 82.8 66.2 73.6 A3 48.7 76.7 59.6 79.6 76.3 77.9 ADV 37.2 77.3 50.2 78.8 37.3 50.6 CAU 81.8 74.4 77.9 84.8 67.2 75.0 DIR 62.7 67.9 65.2 71.0 50.7 59.1 EXT 51.4 87.4 64.7 90.4 87.2 88.8 LOC 71.5 74.6 73.0 82.6 56.7 67.3 MNR 62.6 58.8 60.6 81.5 44.1 57.2 TMP 80.5 74.0 77.1 80.1 38.7 52.2 MOD 68.2 44.4 53.8 90.4 89.6 90.0 NEG 38.2 98.5 55.0 49.6 98.8 66.1 DIS 42.5 87.5 57.2 62.2 75.4 68.2 Table 4: Clustering results for individual semantic roles with our split-merge algorithm and the syntactic function baseline. ponent on its own (settings auto/auto and gold/auto). It obtained a precision of 88.1% (percentage of semantic arguments out of those identified) and recall of 87.9% (percentage of identified arguments out of all gold arguments). However, note that these figures are not strictly comparable to those reported for supervised systems, due to the fact that our argument identification component only discards nonargument candidates. Tables 3 and 4 shows how performance varies across verbs and roles, respectively. We compare the syntactic function baseline and the split-merge system on the auto/auto dataset. 
Table 3 presents results for 12 verbs which we selected so as to exhibit varied occurrence frequencies and alternation patterns. As can be seen, the macroscopic result — increase in F1 (shown in bold face) and purity — also holds across verbs. Some caution is needed in interpreting the results in Table 43 since core roles A0–A3 are defined on a per-verb basis and do not necessarily have a uniform corpus-wide interpretation. Thus, conflating scores across verbs is only meaningful to the extent that these labels actually signify the same 3Results are shown for four core roles (A0–A3) and all subtypes of the ArgM role, i.e., adjuncts denoting general purpose (ADV), cause (CAU), direction (DIR), extent (EXT), location (LOC), manner (MNR), and time (TMP), modal verbs (MOD), negative markers (NEG), and discourse connectives (DIS). 1124 role (which is mostly true for A0 and A1). Furthermore, the purity scores given here represent the average purity of those clusters for which the specified role is the majority role. We observe that for most roles shown in Table 4 the split-merge algorithm improves upon the baseline with regard to F1, whereas this is uniformly the case for purity. What are the practical implications of these results, especially when considering the collocationpurity tradeoff? If we were to annotate the clusters induced by our system, low collocation would result in higher annotation effort while low purity would result in poorer data quality. Our system improves purity substantially over the baselines, without affecting collocation in a way that would massively increase the annotation effort. As an example, consider how our system could support humans in labeling an unannotated corpus. (The following numbers are derived from the CoNLL dataset4 in the auto/auto setting.) We might decide to annotate all induced clusters with more than 10 instances. This means we would assign labels to 74% of instances in the dataset (excluding those discarded during argument identification) and attain a role classification with 79.4% precision (purity).5 However, instead of labeling all 165,662 instances contained in these clusters individually we would only have to assign labels to 2,869 clusters. Since annotating a cluster takes roughly the same time as annotating a single instance, the annotation effort is reduced by a factor of about 50. 8 Conclusions In this paper we presented a novel approach to unsupervised role induction which we formulated as a clustering problem. We proposed a split-merge algorithm that iteratively manipulates clusters representing semantic roles whilst trading off cluster purity with collocation. The split phase creates “pure” clusters that contain arguments of the same role whereas the merge phase attempts to increase collocation by merging clusters which are likely to represent the same role. The approach is simple, intu4Of course, it makes no sense to label this dataset as it is already labeled. 5Purity here is slightly lower than the score reported in Table 2 (auto/auto setting), because it is computed over a different number of clusters (only those with at least 10 instances). itive and requires no manual effort for training. Coupled with a rule-based component for automatically identifying argument candidates our split-merge algorithm forms an end-to-end system that is capable of inducing role labels without any supervision. Our approach holds promise for reducing the data acquisition bottleneck for supervised systems. 
It could be usefully employed in two ways: (a) to create preliminary annotations, thus supporting the “annotate automatically, correct manually” methodology used for example to provide high volume annotation in the Penn Treebank project; and (b) in combination with supervised methods, e.g., by providing useful out-of-domain data for training. An important direction for future work lies in investigating how the approach generalizes across languages as well as reducing our system’s reliance on a treebank-trained parser. Acknowledgments We are grateful to Charles Sutton for his valuable feedback on this work. The authors acknowledge the support of EPSRC (grant GR/T04540/01). Appendix The relations in Rule (2) from Table 1 are IM↑↓, PRT↓, COORD↑↓, P↑↓, OBJ↑, PMOD↑, ADV↑, SUB↑↓, ROOT↑, TMP↑, SBJ↑, OPRD↑. The symbols ↑and ↓denote the direction of the dependency arc (upward and downward, respectively). The relations in Rule (3) are ADV↑↓, AMOD↑↓, APPO↑↓, BNF↑↓-, CONJ↑↓, COORD↑↓, DIR↑↓, DTV↑↓-, EXT↑↓, EXTR↑↓, HMOD↑↓, IOBJ↑↓, LGS↑↓, LOC↑↓, MNR↑↓, NMOD↑↓, OBJ↑↓, OPRD↑↓, POSTHON↑↓, PRD↑↓, PRN↑↓, PRP↑↓, PRT↑↓, PUT↑↓, SBJ↑↓, SUB↑↓, SUFFIX↑↓. Dependency labels are abbreviated here. A detailed description is given in Surdeanu et al. (2008), in their Table 4. References O. Abend, R. Reichart, and A. Rappoport. 2009. Unsupervised Argument Identification for Semantic Role Labeling. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, pages 28–36, Singapore. 1125 D. Dowty. 1991. Thematic Proto Roles and Argument Selection. Language, 67(3):547–619. H. F¨urstenau and M. Lapata. 2009. Graph Aligment for Semi-Supervised Semantic Role Labeling. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 11–20, Singapore. D. Gildea and D. Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3):245–288. A. Gordon and R. Swanson. 2007. Generalizing Semantic Role Annotations Across Syntactically Similar Verbs. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 192–199, Prague, Czech Republic. T. Grenager and C. Manning. 2006. Unsupervised Discovery of a Statistical Verb Lexicon. In Proceedings of the Conference on Empirical Methods on Natural Language Processing, pages 1–8, Sydney, Australia. K. Kipper, H. T. Dang, and M. Palmer. 2000. ClassBased Construction of a Verb Lexicon. In Proceedings of the 17th AAAI Conference on Artificial Intelligence, pages 691–696. AAAI Press / The MIT Press. J. Lang and M. Lapata. 2010. Unsupervised Induction of Semantic Roles. In Proceedings of the 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 939– 947, Los Angeles, California. L. M`arquez, X. Carreras, K. Litkowski, and S. Stevenson. 2008. Semantic Role Labeling: an Introduction to the Special Issue. Computational Linguistics, 34(2):145– 159, June. G. Melli, Y. Wang, Y. Liu, M. M. Kashani, Z. Shi, B. Gu, A. Sarkar, and F. Popowich. 2005. Description of SQUASH, the SFU Question Answering Summary Handler for the DUC-2005 Summarization Task. In Proceedings of the Human Language Technology Conference and the Conference on Empirical Methods in Natural Language Processing Document Understanding Workshop, Vancouver, Canada. J. Nivre, J. Hall, J. Nilsson, G. Eryigit A. Chanev, S. K¨ubler, S. Marinov, and E. 
Marsi. 2007. MaltParser: A Language-independent System for Datadriven Dependency Parsing. Natural Language Engineering, 13(2):95–135. S. Pad´o and M. Lapata. 2009. Cross-lingual Annotation Projection of Semantic Roles. Journal of Artificial Intelligence Research, 36:307–340. M. Palmer, D. Gildea, and P. Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31(1):71–106. S. Pradhan, W. Ward, and J. Martin. 2008. Towards Robust Semantic Role Labeling. Computational Linguistics, 34(2):289–310. D. Shen and M. Lapata. 2007. Using Semantic Roles to Improve Question Answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and the Conference on Computational Natural Language Learning, pages 12–21, Prague, Czech Republic. M. Surdeanu, S. Harabagiu, J. Williams, and P. Aarseth. 2003. Using Predicate-Argument Structures for Information Extraction. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 8–15, Sapporo, Japan. M. Surdeanu, R. Johansson, A. Meyers, and L. M`arquez. 2008. The CoNLL-2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies. In Proceedings of the 12th CoNLL, pages 159–177, Manchester, England. R. Swier and S. Stevenson. 2004. Unsupervised Semantic Role Labelling. In Proceedings of the Conference on Empirical Methods on Natural Language Processing, pages 95–102, Barcelona, Spain. D. Wu and P. Fung. 2009. Semantic Roles for SMT: A Hybrid Two-Pass Model. In Proceedings of North American Annual Meeting of the Association for Computational Linguistics HLT 2009: Short Papers, pages 13–16, Boulder, Colorado. 1126
2011
112
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1127–1136, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Using Cross-Entity Inference to Improve Event Extraction Yu Hong Jianfeng Zhang Bin Ma Jianmin Yao Guodong Zhou Qiaoming Zhu School of Computer Science and Technology, Soochow University, Suzhou City, China {hongy, jfzhang, bma, jyao, gdzhou, qmzhu}@suda.edu.cn Abstract Event extraction is the task of detecting certain specified types of events that are mentioned in the source language data. The state-of-the-art research on the task is transductive inference (e.g. cross-event inference). In this paper, we propose a new method of event extraction by well using cross-entity inference. In contrast to previous inference methods, we regard entitytype consistency as key feature to predict event mentions. We adopt this inference method to improve the traditional sentence-level event extraction system. Experiments show that we can get 8.6% gain in trigger (event) identification, and more than 11.8% gain for argument (role) classification in ACE event extraction. 1 Introduction The event extraction task in ACE (Automatic Content Extraction) evaluation involves three challenging issues: distinguishing events of different types, finding the participants of an event and determining the roles of the participants. The recent researches on the task show the availability of transductive inference, such as that of the following methods: cross-document, crosssentence and cross-event inferences. Transductive inference is a process to use the known instances to predict the attributes of unknown instances. As an example, given a target event, the cross-event inference can predict its type by well using the related events co-occurred with it within the same document. From the sentence: (1)He left the company. it is hard to tell whether it is a Transport event in ACE, which means that he left the place; or an End-Position event, which means that he retired from the company. But cross-event inference can use a related event “Then he went shopping” within the same document to identify it as a Transport event correctly. As the above example might suggest, the availability of transductive inference for event extraction relies heavily on the known evidences of an event occurrence in specific condition. However, the evidence supporting the inference is normally unclear or absent. For instance, the relation among events is the key clue for cross-event inference to predict a target event type, as shown in the inference process of the sentence (1). But event relation extraction itself is a hard task in Information Extraction. So cross-event inference often suffers from some false evidence (viz., misleading by unrelated events) or lack of valid evidence (viz., unsuccessfully extracting related events). In this paper, we propose a new method of transductive inference, named cross-entity inference, for event extraction by well using the relations among entities. This method is firstly motivated by the inherent ability of entity types in revealing event types. From the sentences: (2)He left the bathroom. (3)He left Microsoft. it is easy to identify the sentence (2) as a Transport event in ACE, which means that he left the place, because nobody would retire (End-Position type) from a bathroom. 
And compared to the entities in sentence (1) and (2), the entity “Microsoft” in (3) would give us more confidence to tag the “left” event as an End-Position type, because people are used to giving the full name of the place where they retired. The cross-entity inference is also motivated by the phenomenon that the entities of the same type often attend similar events. That gives us a way to predict event type based on entity-type consistency. From the sentence: (4)Obama beats McCain. it is hard to identify it as an Elect event in ACE, which means Obama wins the Presidential Election, 1127 or an Attack event, which means Obama roughs somebody up. But if we have the priori knowledge that the sentence “Bush beats McCain” is an Elect event, and “Obama” was a presidential contender just like “Bush” (strict type consistency), we have ample evidence to predict that the sentence (4) is also an Elect event. Indeed above cross-entity inference for eventtype identification is not the only use of entity-type consistency. As we shall describe below, we can make use of it at all issues of event extraction: y For event type: the entities of the same type are most likely to attend similar events. And the events often use consistent or synonymous trigger. y For event argument (participant): the entities of the same type normally co-occur with similar participants in the events of the same type. y For argument role: the arguments of the same type, for the most part, play the same roles in similar events. With the help of above characteristics of entity, we can perform a step-by-step inference in this order: y Step 1: predicting event type and labeling trigger given the entities of the same type. y Step 2: identifying arguments in certain event given priori entity type, event type and trigger that obtained by step 1. y Step 3: determining argument roles in certain event given entity type, event type, trigger and arguments that obtained by step 1 and step 2. On the basis, we give a blind cross-entity inference method for event extraction in this paper. In the method, we first regard entities as queries to retrieve their related documents from large-scale language resources, and use the global evidences of the documents to generate entity-type descriptions. Second we determine the type consistency of entities by measuring the similarity of the type descriptions. Finally, given the priori attributes of events in the training data, with the help of the entities of the same type, we perform the step-by-step cross-entity inference on the attributes of test events (candidate sentences). In contrast to other transductive inference methods on event extraction, the cross-entity inference makes every effort to strengthen effects of entities in predicting event occurrences. Thus the inferential process can benefit from following aspects: 1) less false evidence, viz. less false entity-type consistency (the key clue of cross-entity inference), because the consistency can be more precisely determined with the help of fully entity-type description that obtained based on the related information from Web; 2) more valid evidence, viz. more entities of the same type (the key references for the inference), because any entity never lack its congeners. 2 Task Description The event extraction task we addressing is that of the Automatic Content Extraction (ACE) evaluations, where an event is defined as a specific occurrence involving participants. 
And event extraction task requires that certain specified types of events that are mentioned in the source language data be detected. We first introduce some ACE terminology to understand this task more easily: y Entity: an object or a set of objects in one of the semantic categories of interest, referred to in the document by one or more (co-referential) entity mentions. y Entity mention: a reference to an entity (typically, a noun phrase). y Event trigger: the main word that most clearly expresses an event occurrence (An ACE event trigger is generally a verb or a noun). y Event arguments: the entity mentions that are involved in an event (viz., participants). y Argument roles: the relation of arguments to the event where they participate. y Event mention: a phrase or sentence within which an event is described, including trigger and arguments. The 2005 ACE evaluation had 8 types of events, with 33 subtypes; for the purpose of this paper, we will treat these simply as 33 separate event types and do not consider the hierarchical structure among them. Besides, the ACE evaluation plan defines the following standards to determine the correctness of an event extraction: y A trigger is correctly labeled if its event type and offset (viz., the position of the trigger word in text) match a reference trigger. y An argument is correctly identified if its event type and offsets match any of the reference argument mentions, in other word, correctly recognizing participants in an event. y An argument is correctly classified if its role matches any of the reference argument mentions. Consider the sentence: 1128 (5) It has refused in the last five years to revoke the license of a single doctor for committing medical errors.1 The event extractor should detect an EndPosition event mention, along with the trigger word “revoke”, the position “doctor”, the person whose license should be revoked, and the time during which the event happened: Event type End-Position Trigger revoke a single doctor Role=Person doctor Role=Position Arguments the last five years Role=Time-within Table 1: Event extraction example It is noteworthy that event extraction depends on previous phases like name identification, entity mention co-reference and classification. Thereinto, the name identification is another hard task in ACE evaluation and not the focus in this paper. So we skip the phase and instead directly use the entity labels provided by ACE. 3 Related Work Almost all the current ACE event extraction systems focus on processing one sentence at a time (Grishman et al., 2005; Ahn, 2006; Hardyet al. 2006). However, there have been several studies using high-level information from a wider scope: Maslennikov and Chua (2007) use discourse trees and local syntactic dependencies in a patternbased framework to incorporate wider context to refine the performance of relation extraction. They claimed that discourse information could filter noisy dependency paths as well as increasing the reliability of dependency path extraction. Finkel et al. (2005) used Gibbs sampling, a simple Monte Carlo method used to perform approximate inference in factored probabilistic models. By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference. 
They used this technique to augment an information extraction system with long-distance dependency models, enforcing label consistency and extraction template consistency constraints. Ji and Grishman (2008) were inspired from the hypothesis of “One Sense Per Discourse” (Ya 1 Selected from the file “CNN_CF_20030304.1900.02” in ACE-2005 corpus. rowsky, 1995); they extended the scope from a single document to a cluster of topic-related documents and employed a rule-based approach to propagate consistent trigger classification and event arguments across sentences and documents. Combining global evidence from related documents with local decisions, they obtained an appreciable improvement in both event and event argument identification. Patwardhan and Riloff (2009) proposed an event extraction model which consists of two components: a model for sentential event recognition, which offers a probabilistic assessment of whether a sentence is discussing a domain-relevant event; and a model for recognizing plausible role fillers, which identifies phrases as role fillers based upon the assumption that the surrounding context is discussing a relevant event. This unified probabilistic model allows the two components to jointly make decisions based upon both the local evidence surrounding each phrase and the “peripheral vision”. Gupta and Ji (2009) used cross-event information within ACE extraction, but only for recovering implicit time information for events. Liao and Grishman (2010) propose document level cross-event inference to improve event extraction. In contrast to Gupta’s work, Liao do not limit themselves to time information for events, but rather use related events and event-type consistency to make predictions or resolve ambiguities regarding a given event. 4 Motivation In event extraction, current transductive inference methods focus on the issue that many events are missing or spuriously tagged because the local information is not sufficient to make a confident decision. The solution is to mine credible evidences of event occurrences from global information and regard that as priori knowledge to predict unknown event attributes, such as that of cross-document and cross-event inference methods. However, by analyzing the sentence-level baseline event extraction, we found that the entities within a sentence, as the most important local information, actually contain sufficient clues for event detection. It is only based on the premise that we know the backgrounds of the entities beforehand. For instance, if we knew the entity “vesuvius” is an active volcano, we could easily identify 1129 the word “erupt”, which co-occurred with the entity, as the trigger of a “volcanic eruption” event but not that of a “spotty rash”. In spite of that, it is actually difficult to use an entity to directly infer an event occurrence because we normally don’t know the inevitable connection between the background of the entity and the event attributes. But we can well use the entities of the same background to perform the inference. In detail, if we first know entity(a) has the same background with entity(b), and we also know that entity(a), as a certain role, participates in a specific event, then we can predict that entity(b) might participtes in a similar event as the same role. Consider the two sentences2 from ACE corpus: (5) American case for war against Saddam. (6) Bush should torture the al Qaeda chief operations officer. 
The sentences are two event mentions which have the same attributes:

(5) Event type: Attack; Trigger: war; Arguments: American (Role=Attacker), Saddam (Role=Target)
(6) Event type: Attack; Trigger: torture; Arguments: Bush (Role=Attacker), ...Qaeda chief ... (Role=Target)
Table 2: Cross-entity inference example

2 They are extracted from the files "CNN_CF_20030305.1900.00-1" and "CNN_CF_20030303.1900.06-1" respectively.

From the sentences, we can find that the entities "Saddam" and "Qaeda chief" have the same background (viz., terrorist leader), and they are both arguments of Attack events in the role of Target. So if we previously know either of the event mentions, we can infer the other one with the help of the entities of the same background. In a word, the cross-entity inference we propose for event extraction is based on the hypothesis: entities of a consistent type normally participate in similar events as the same role. As we will show below, statistical data from the ACE training corpus support this hypothesis, showing the consistency of event type and role in event mentions where entities of the same type occur.

4.1 Entity Consistency and Distribution Within the ACE corpus, there is a strong entity consistency: if one entity mention appears in a type of event, other entity mentions of the same type will appear in similar events, and even use the same word to trigger the events. To see this we calculated the conditional probability (in the ACE corpus) of a certain entity type appearing in the 33 ACE event subtypes.

Figure 1. Conditional probability of a certain entity type appearing in the 33 ACE event subtypes (here only the probabilities of Population-Center, Exploding and Air entities are shown as examples)

Figure 2. Conditional probability of an entity type appearing as the 34 ACE role types (here only the probabilities of Population-Center, Exploding and Air entities are shown as examples)

As there are 33 event subtypes and 43 entity types, there are potentially 33 × 43 = 1419 entity-event combinations. However, only a few of these appear with substantial frequency. For example, Population-Center entities occur in only 4 types of event mentions with conditional probability greater than 0.05. From Table 3, we can find that only Attack and Transport events co-occur frequently with Population-Center entities (see Figure 1 and Table 3).

Event       Cond.Prob.  Freq.
Transport   0.368       197
Attack      0.295       158
Meet        0.073        39
Die         0.069        37
Table 3: Events co-occurring with Population-Center with conditional probability > 0.05

In fact, we find that most entity types appear in even more restricted event mentions than Population-Center entities. For example, Air entity co-occurs with only 5 event types (Attack, Transport, Die, Transfer-Ownership and Injure), and Exploding
Especially, they only co-occur with one or two event types with the conditional probability more than 0.05. Evnt.<=5 5<Evnt.<=10 Evnt.>10 Freq. > 0 24 7 12 Freq. >10 37 4 2 Freq. >50 41 1 1 Table 4: Distribution of entity-event combination corresponding to different co-occurrence frequency Table 4 gives the distributions of whole ACE entity types co-occurring with event types. We can find that there are 37 types of entities (out of 43 in total) appearing in less than 5 types of event mentions when entity-event co-occurrence frequency is larger than 10, and only 2 (e.g. Individual) appearing in more than 10 event types. And when the frequency is larger than 50, there are 41 (95%) entity types co-occurring with less than 5 event types. These distributions show the fact that most instances of a certain entity type normally participate in events of the same type. And the distributions might be good predictors for event type detection and trigger determination. Air (Entity type) Attack event Fighter plane (subtype 1): “MiGs” “enemy planes” “warplanes” “allied aircraft” “U.S. jets” “a-10 tank killer” “b-1 bomber” “a-10 warthog” “f-14 aircraft” “apache helicopter” Spacecraft (subtype 2): “russian soyuz capsule” “soyuz” Civil aviation (subtype 3): “airliners” “the airport” “Hooters Air executive” Transport event Private plane (subtype 4): “Marine One” “commercial flight” “private plane” Table 5: Event types co-occurred with Air entities Besides, an ACE entity type actually can be divided into more cohesive subtypes according to similarity of background of entity, and such a subtype nearly always co-occur with unique event type. For example, the Air entities can be roughly divided into 4 subtypes: Fighter plane, Spacecraft, Civil aviation and Private plane, within which the Fighter plane entities all appear in Attack event mentions, and other three subtypes all co-occur with Transport events (see Table 5). This consistency of entities in a subtype is helpful to improve the precision of the event type predictor. 4.2 Role Consistency and Distribution The same thing happens for entity-role combinations: entities of the same type normally play the same role, especially in the event mentions of the same type. For example, the Population-Center entities occur in ACE corpus as only 4 role types: Place, Destination, Origin and Entity respectively with conditional probability 0.615, 0.289, 0.093, 0.002 (see Figure 2). And They mainly appear in Transport event mentions as Place, and in Attack as Destination. Particularly the Exploding entities only occur as Instrument and Artifact respectively with the probability 0.986 and 0.014. They almost entirely appear in Attack events as Instrument. Evnt.<=5 5<Evnt.<=10 Evnt.>10 Freq. > 0 32 5 6 Freq. >10 38 3 2 Freq. >50 42 1 0 Table 6: Distribution of entity-role combination corresponding to different co-occurrence frequency Table 6 gives the distributions of whole entityrole combinations in ACE corpus. We can find that there are 38 entity types (out of 43 in total) occur as less than 5 role types when the entity-role cooccurrence frequency is larger than 10. There are 42 (98%) when the frequency is larger than 50, and only 2 (e.g. Individual) when larger than 10. The distributions show that the instances of an entity type normally occur as consistent role, which is helpful for cross-entity inference to predict roles. 5 Cross-entity Approach In this section we present our approach to using blind cross-entity inference to improve sentencelevel ACE event extraction. 
Our event extraction system extracts events independently for each sentence, because the definition of event mention constrains them to appear in the same sentence. Every sentence that at least involves one entity mention will be regarded as a candidate event mention, and a randomly selected entity mention from the candidate will be the staring of the whole extraction process. For the entity mention, information retrieval is used to mine its background knowledge from Web, and its type is determined by comparing the knowledge with those in training corpus. Based on the entity type, the extraction system performs our step-by-step cross-entity inference to predict the attributes of 1131 the candidate event mention: trigger, event type, arguments, roles and whether or not being an event mention. The main frame of our event extraction system is shown in Figure 3, which includes both training and testing processes. Figure 3. The frame of cross-entity inference for event extraction (including training and testing processes) In the training process, for every entity type in the ACE training corpus, a clustering technique (CLUTO toolkit)3 is used to divide it into different cohesive subtypes, each of which only contains the entities of the same background. For instance, the Air entities will be divided into Fighter plane, Spacecraft, Civil aviation, Private plane, etc (see Table 5). And for each subtype, we mine event mentions where this type of entities appear from ACE training corpus, and extract all the words which trigger the events to establish corresponding trigger list. Besides, a set of support vector machine (SVM) based classifiers are also trained: y Argument Classifier: to distinguish arguments of a potential trigger from non-arguments4; y Role Classifier: to classify arguments by argument role; y Reportable-Event Classifier (Trigger Classifier): Given entity types, a potential trigger, an event type, and a set of arguments, to determine whether there is a reportable event mention. 3http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=h tml&identifier=ADA439508 4 It is noteworthy that a sentence may include more than one event (more than one trigger). So it is necessary to distinguish arguments of a potential trigger from that of others. In the test process, for each candidate event mention, our event extraction system firstly predicts its triggers and event types: given an randomly selected entity mention from the candidate, the system determines the entity subtype it belonging to and the corresponding trigger list, and then all non-entity words in the candidate are scanned for a instance of triggers from the list. When an instance is found, the system tags the candidate as the event type that the most frequently co-occurs with the entity subtype in the events that triggered by the instance. Secondly the argument classifier is applied to the remaining mentions in the candidate; for any argument passing that classifier, the role classifier is used to assign a role to it. Finally, once all arguments have been assigned, the reportableevent classifier is applied to the candidate; if the result is successful, this event mention is reported. 5.1 Further Division of Entity Type One of the most important pretreatments before our blind cross-entity inference is to divide the ACE entity type into more cohesive subtype. The greater consistency among backgrounds of entities in such a subtype might be good to improve the precision of cross-entity inference. 
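As a compact summary of the test-time procedure just described (and of Figure 3), the sketch below strings the steps together. It is schematic and rests on stated assumptions: the subtype assigner, per-subtype trigger lists, subtype-trigger co-occurrence counts, and the three classifiers are injected stand-ins with assumed interfaces, not the authors' actual components.

```python
def extract_event(entity_mentions, other_words, assign_subtype, trigger_lists,
                  event_type_counts, argument_clf, role_clf, reportable_clf):
    """Sketch of the Section 5 test process for one candidate sentence.
    entity_mentions: the sentence's entity mentions; other_words: its non-entity words.
    event_type_counts[(subtype, trigger)] is assumed to be a collections.Counter of
    event types. Returns a reported event mention, or None."""
    if not entity_mentions:
        return None                                    # not a candidate event mention
    anchor = entity_mentions[0]                        # a randomly selected entity mention
    subtype = assign_subtype(anchor)                   # nearest subtype centroid (Section 5.1)

    for word in other_words:                           # scan for an instance from the trigger list
        if word not in trigger_lists.get(subtype, set()):
            continue
        # event type that most frequently co-occurs with this subtype for this trigger
        event_type = event_type_counts[(subtype, word)].most_common(1)[0][0]
        arguments = [m for m in entity_mentions        # remaining mentions go to the classifier
                     if argument_clf.predict(event_type, subtype, m)]
        roles = {m: role_clf.predict(event_type, subtype, m, arguments) for m in arguments}
        if reportable_clf.predict(event_type, word, roles):
            return {"event_type": event_type, "trigger": word, "roles": roles}
    return None
```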
1132 For each ACE entity type, we collect all entity mentions of the type from training corpus, and regard each such mention as a query to retrieve the 50 most relevant documents from Web. Then we select 50 key words that the most weighted by TFIDF in the documents to roughly describe background of entity. After establishing the vector space model (VSM) for each entity mention of the type, we adopt a clustering toolkit (CLUTO) to further divide the mentions into different subtypes. Finally, for each subtype, we describe its centroid by using 100 key words which the most frequently occurred in relevant documents of entities of the subtype. In the test process, for an entity mention in a candidate event mention, we determine its type by comparing its background against all centroids of subtypes in training corpus, and the subtype whose centroid has the most Cosine similarity with the background will be assigned to the entity. It is noteworthy that global information from the Web is only used to measure the entity-background consistency and not directly in the inference process. Thus our event extraction system actually still performs a sentence-level inference based on local information. 5.2 Cross-Entity Inference Our event extraction system adopts a step-bystep cross-entity inference to predict event. As discussed above, the first step is to determine the trigger in a candidate event mention and tag its event type based on consistency of entity type. Given the domain of event mention that restrained by the known trigger, event type and entity subtype, the second step is to distinguish the most probable arguments that co-occurring in the domain from the non-arguments. Then for each of the arguments, the third step can use the co-occurring arguments in the domain as important contexts to predict its role. Finally, the inference process determines whether the candidate is a reportable event mention according to a confidence coefficient. In the following sections, we focus on introducing the three classifiers: argument classifier, role classifier and reportable-event classifier. 5.2.1 Cross-Entity Argument Classifier For a candidate event mention, the first step gives its event type, which roughly restrains the domain of event mentions where the arguments of the candidate might co-occur. On the basis, given an entity mention in the candidate and its type (see the pretreatment process in section 5.1), the argument classifier could predict whether other entity mentions co-occur with it in such a domain, if yes, all the mentions will be the arguments of the candidate. In other words, if we know an entity of a certain type participates in some event, we will think of what entities also should participate in the event. For instance, when we know a defendant goes on trial, we can conclude that the judge, lawyer and witness should appear in court. Argument Classifier Feature 1: an event type (an event-mention domain) Feature 2: an entity subtype Feature 3: entity-subtype co-occurrence in domain Feature 4: distance to trigger Feature 5: distances to other arguments Feature 6: co-occurrence with trigger in clause Role Classifier Feature 1 and Feature 2 Feature 7: entity-subtypes of arguments Reportable-Event Classifier Feature 1 Feature 8: confidence coefficient of trigger in domain Feature 9: confidence coefficient of role in domain Table 7: Features selected for SVM-based crossentity classifiers A SVM-based argument classifier is used to determine arguments of candidate event mention. 
Each feature of this classifier is the conjunction of: y The subtype of an entity y The event type we are trying to assign an argument to y A binary indicator of whether this entity subtype co-occurs with other subtypes in such an event type (There are 266 entity subtypes, and so 266 features for each instance) Some minor features, such as another binary indicator of whether arguments co-occur with trigger in the same clause (see Table 7). 5.2.2 Cross-Entity Role Classifier For a candidate event mention, the arguments that given by the second step (argument classifier) provide important contextual information for predicting what role the local entity (also one of the arguments) takes on. For instance, when citizens (Arg1) co-occur with terrorist (Arg2), most likely the role of Arg1 is Victim. On the basis, with the help of event type, the prediction might be more 1133 precise. For instance, if the Arg1 and Arg2 cooccur in an Attack event mention, we will have more confidence in the Victim role of Arg1. Besides, as discussed in section 4, entities of the same type normally take on the same role in similar events, especially when they co-occur with similar arguments in the events (see Table 2). Therefore, all instances of co-occurrence model {entity subtype, event type, arguments} in training corpus could provide effective evidences for predicting the role of argument in the candidate event mention. Based on this, we trained a SVM-based role classifier which uses following features: y Feature 1 and Feature 2 (see Table 7) y Given the event domain that restrained by the entity and event types, an indicator of what subtypes of arguments appear in the domain. (266 entity subtypes make 266 features for each instance) 5.2.3 Reportable-Event Classifier At this point, there are still two issues need to be resolved. First, some triggers are common words which often mislead the extraction of candidate event mention, such as “it”, “this”, “what”, etc. These words only appear in a few event mentions as trigger, but when they once appear in trigger list, a large quantity of noisy sentences will be regarded as candidates because of their commonness in sentences. Second, some arguments might be tagged as more than one role in specific event mentions, but as ACE event guideline, one argument only takes on one role in a sentence. So we need to remove those with low confidence. A confidence coefficient is used to distinguish the correct triggers and roles from wrong ones. The coefficient calculate the frequency of a trigger (or a role) appearing in specific domain of event mentions and that in whole training corpus, then combines them to represent its confidence degree, just like TFIDF algorithm. Thus, the more typical triggers (or roles) will be given high confidence. Based on the coefficient, we use a SVM-based classifier to determine the reportable events. Each feature of this classifier is the conjunction of: y An event type (domain of event mentions) y Confidence coefficients of triggers in domain y Confidence coefficients of roles in the domain. 6 Experiments We followed Liao (2010)’s evaluation and randomly select 10 newswire texts from the ACE 2005 training corpus as our development set, which is used for parameter tuning, and then conduct a blind test on a separate set of 40 ACE 2005 newswire texts. We use the rest of the ACE training corpus (549 documents) as training data for our event extraction system. 
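The paper describes the confidence coefficient only qualitatively, as combining a trigger's (or role's) frequency within a specific event-mention domain with its frequency across the whole training corpus, "just like TFIDF". The snippet below is therefore one plausible instantiation under that reading, with illustrative parameter names; it is not the authors' formula.

```python
import math

def confidence(term, domain_term_counts, term_domain_frequency, total_domains):
    """A TF-IDF-style confidence score for a trigger word or role label: high when
    the term is frequent inside the event-type domain but rare across domains
    overall. The exact formula is an assumption, not taken from the paper."""
    tf = domain_term_counts[term] / max(1, sum(domain_term_counts.values()))
    idf = math.log(total_domains / (1 + term_domain_frequency.get(term, 0)))
    return tf * idf
```

Under such a scheme, common words like "it" or "what" would receive low confidence because they occur across many domains, which matches the filtering behaviour described in Section 5.2.3.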
To compare with the reported work on crossevent inference (Liao, 2010) and its sentence-level baseline system, we cross-validate our method on 10 separate sets of 40 ACE texts, and report the optimum, worst and mean performances (see Table 8) on the data by using Precision (P), Recall (R) and F-measure (F). In addition, we also report the performance of two human annotators on 40 ACE newswire texts (a random blind test set): one knows the rules of event extraction; the other knows nothing about it. 6.1 Main Results From the results presented in Table 8, we can see that using the cross-entity inference, we can improve the F score of sentence-level event extraction for trigger classification by 8.59%, argument classification by 11.86%, and role classification by 11.9% (mean performance). Compared to the cross-event inference, we gains 2.87% improvement for argument classification, and 3.81% for role classification (mean performance). Especially, our worst results also have better performances than cross-event inference. Nonetheless, the cross-entity inference has worse F score for trigger determination. As we can see, the low Recall score weaken its F score (see Table 8). Actually, we select the sentence which at least includes one entity mention as candidate event mention, but lots of event mentions in ACE never include any entity mention. Thus we have missed some mentions at the starting of inference process. In addition, the annotator who knows the rules of event extraction has a similar performance trend with systems: high for trigger classification, middle for argument classification, and low for role classification (see Table 8). But the annotator who never works in this field obtains a different trend: higher performance for argument classification. This phenomenon might prove that the step-bystep inference is not the only way to predicate event mention because human can determine arguments without considering triggers and event types. 1134 Performance System/Human Trigger (%) Argument (%) Role (%) P R F P R F P R F Sentence-level baseline 67.56 53.54 59.74 46.45 37.15 41.29 41.02 32.81 36.46 Cross-event inference 68.71 68.87 68.79 50.85 49.72 50.28 45.06 44.05 44.55 Cross-entity inference (optimum) 73.4 66.2 69.61 56.96 55.1 56 49.3 46.59 47.9 Cross-entity inference (worst) 71.3 64.17 66.1 51.28 50.3 50.78 46.3 44.3 45.28 Cross-entity inference (mean) 72.9 64.3 68.33 53.4 52.9 53.15 51.6 45.5 48.36 Human annotation 1 (blind) 58.9 59.1 59.0 62.6 65.9 64.2 50.3 57.69 53.74 Human annotation 2 (know rules) 74.3 76.2 75.24 68.5 75.8 71.97 61.3 68.8 64.86 Table 8: Overall performance on blind test data 6.2 Influence of Clustering on Inference A main part of our blind inference system is the entity-type consistency detection, which relies heavily on the correctness of entity clustering and similarity measurement. In training, we used CLUTO clustering toolkit to automatically generate different types of entities based on their background-similarities. In testing, we use K-nearest neighbor algorithm to determine entity type. Fighter plane (subtype 1 in Air entities): “warplanes” “allied aircraft” “U.S. jets” “a-10 tank killer” “b-1 bomber” “a-10 warthog” “f-14 aircraft” “apache helicopter” “terrorist” “Saddam” “Saddam Hussein” “Baghdad”… Table 9: Noises in subtype 1 of “Air” entities (The blod fonts are noises) We obtained 129 entity subtypes from training set. By randomly inspecting 10 subtypes, we found nearly every subtype involves no less than 19.2% noises. 
For example, the subtype 1 of “Air” in Table 5 lost the entities of “MiGs” and “enemy planes”, but involved “terrorist”, “Saddam”, etc (See Table 9). Therefore, we manually clustered the subtypes and retry the step-by-step cross-entity inference. The results (denoted as “Visible 1”) are shown in Table 10, within which, we additionally show the performance of the inference on the rough entity types provided by ACE (denoted as “Visible 2”), such as the type of “Air”, “Population-Center”, “Exploding”, etc., which normally can be divided into different more cohesive subtypes. And the “Blind” in Table 10 denotes the performances on our subtypes obtained by CLUTO. It is surprised that the performances (see Table 10, F-score) on “Visible 1” entity subtypes are just a little better than “Blind” inference. So it seems that the noises in our blind entity types (CLUTO clusters) don’t hurt the inference much. But by reinspecting the “Visible 1” subtypes, we found that their granularities are not enough small: the 89 manual entity clusters actually can be divided into more cohesive subtypes. So the improvements of inference on noise-free “Visible 1” subtypes are partly offset by loss on weakly consistent entities in the subtypes. It can be proved by the poor performances on “Visible 2” subtypes which are much more general than “Visible 1”. Therefore, a reasonable clustering method is important in our inference process. F-score Trigger Argument Role Blind 68.33 53.15 48.36 Visible 1 69.15 53.65 48.83 Visible 2 51.34 43.40 39.95 Table 10: Performances on visible VS blind 7 Conclusions and Future Work We propose a blind cross-entity inference method for event extraction, which well uses the consistency of entity mention to achieve sentence-level trigger and argument (role) classification. Experiments show that the method has better performance than cross-document and cross-event inferences in ACE event extraction. The inference presented here only considers the helpfulness of entity types of arguments to role classification. But as a superior feature, contextual roles can provide more effective assistance to role determination of local argument. For instance, when an Attack argument appears in a sentence, a Target might be there. So if we firstly identify simple roles, such as the condition that an argument has only a single role, and then use the roles as priori knowledge to classify hard ones, may be able to further improve performance. Acknowledgments We thank Ruifang He. And we acknowledge the support of the National Natural Science Foundation of China under Grant Nos. 61003152, 60970057, 90920004. 1135 References David Ahn. 2006. The stages of event extraction. In Proc. COLING/ACL 2006 Workshop on Annotating and Reasoning about Time and Events.Sydney, Australia. Jenny Rose Finkel, Trond Grenager and Christopher Manning. 2005. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling. In Proc. 43rd Annual Meeting of the Association for Computational Linguistics, pages 363–370, Ann Arbor, MI, June. Prashant Gupta and Heng Ji. 2009. Predicting Unknown Time Arguments based on Cross-Event Propagation. In Proc. ACL-IJCNLP 2009. Ralph Grishman, David Westbrook and Adam Meyers. 2005. NYU’s English ACE 2005 System Description. In Proc. ACE 2005 Evaluation Workshop, Gaithersburg, MD. Hilda Hardy, Vika Kanchakouskaya and Tomek Strzalkowski. 2006. Automatic Event Classification Using Surface Text Features. In Proc. AAAI06 Workshop on Event Extraction and Synthesis. Boston, MA. 
Heng Ji and Ralph Grishman. 2008. Refining Event Extraction through Cross-Document Inference. In Proc. ACL-08: HLT, pages 254–262, Columbus, OH, June. Shasha Liao and Ralph Grishman. 2010. Using Document Level Cross-Event Inference to Improve Event Extraction. In Proc. ACL-2010, pages 789-797, Uppsala, Sweden, July. Mstislav Maslennikov and Tat-Seng Chua. 2007. A Multi resolution Framework for Information Extraction from Free Text. In Proc. 45th Annual Meeting of the Association of Computational Linguistics, pages 592–599, Prague, Czech Republic, June. Siddharth Patwardhan and Ellen Riloff. 2007. Effective Information Extraction with Semantic Affinity Patterns and Relevant Regions. In Proc. Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2007, pages 717–727, Prague, Czech Republic, June. Siddharth Patwardhan and Ellen Riloff. 2009. A Unified Model of Phrasal and Sentential Evidence for Information Extraction. In Proc. Conference on Empirical Methods in Natural Language Processing 2009, (EMNLP-09). David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proc. ACL 1995. Cambridge, MA. 1136
2011
113
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1137–1147, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Peeling Back the Layers: Detecting Event Role Fillers in Secondary Contexts Ruihong Huang and Ellen Riloff School of Computing University of Utah Salt Lake City, UT 84112 {huangrh,riloff}@cs.utah.edu Abstract The goal of our research is to improve event extraction by learning to identify secondary role filler contexts in the absence of event keywords. We propose a multilayered event extraction architecture that progressively “zooms in” on relevant information. Our extraction model includes a document genre classifier to recognize event narratives, two types of sentence classifiers, and noun phrase classifiers to extract role fillers. These modules are organized as a pipeline to gradually zero in on event-related information. We present results on the MUC-4 event extraction data set and show that this model performs better than previous systems. 1 Introduction Event extraction is an information extraction (IE) task that involves identifying the role fillers for events in a particular domain. For example, the Message Understanding Conferences (MUCs) challenged NLP researchers to create event extraction systems for domains such as terrorism (e.g., to identify the perpetrators, victims, and targets of terrorism events) and management succession (e.g., to identify the people and companies involved in corporate management changes). Most event extraction systems use either a learning-based classifier to label words as role fillers, or lexico-syntactic patterns to extract role fillers from pattern contexts. Both approaches, however, generally tackle event recognition and role filler extraction at the same time. In other words, most event extraction systems primarily recognize contexts that explicitly refer to a relevant event. For example, a system that extracts information about murders will recognize expressions associated with murder (e.g., “killed”, “assassinated”, or “shot to death”) and extract role fillers from the surrounding context. But many role fillers occur in contexts that do not explicitly mention the event, and those fillers are often overlooked. For example, the perpetrator of a murder may be mentioned in the context of an arrest, an eyewitness report, or speculation about possible suspects. Victims may be named in sentences that discuss the aftermath of the event, such as the identification of bodies, transportation of the injured to a hospital, or conclusions drawn from an investigation. We will refer to these types of sentences as “secondary contexts” because they are generally not part of the main event description. Discourse analysis is one option to explicitly link these secondary contexts to the event, but discourse modelling is itself a difficult problem. The goal of our research is to improve event extraction by learning to identify secondary role filler contexts in the absence of event keywords. We create a set of classifiers to recognize role-specific contexts that suggest the presence of a likely role filler regardless of whether a relevant event is mentioned or not. For example, our model should recognize that a sentence describing an arrest probably includes a reference to a perpetrator, even though the crime itself is reported elsewhere. 
Extracting information from these secondary contexts can be risky, however, unless we know that the larger context is discussing a relevant event. To 1137 address this, we adopt a two-pronged strategy for event extraction that handles event narrative documents differently from other documents. We define an event narrative as an article whose main purpose is to report the details of an event. We apply the rolespecific sentence classifiers only to event narratives to aggressively search for role fillers in these stories. However, other types of documents can mention relevant events too. The MUC-4 corpus, for example, includes interviews, speeches, and terrorist propaganda that contain information about terrorist events. We will refer to these documents as fleeting reference texts because they mention a relevant event somewhere in the document, albeit briefly. To ensure that relevant information is extracted from all documents, we also apply a conservative extraction process to every document to extract facts from explicit event sentences. Our complete event extraction model, called TIER, incorporates both document genre and rolespecific context recognition into 3 layers of analysis: document analysis, sentence analysis, and noun phrase (NP) analysis. At the top level, we train a text genre classifier to identify event narrative documents. At the middle level, we create two types of sentence classifiers. Event sentence classifiers identify sentences that are associated with relevant events, and role-specific context classifiers identify sentences that contain possible role fillers irrespective of whether an event is mentioned. At the lowest level, we use role filler extractors to label individual noun phrases as role fillers. As documents pass through the pipeline, they are analyzed at different levels of granularity. All documents pass through the event sentence classifier, and event sentences are given to the role filler extractors. Documents identified as event narratives additionally pass through role-specific sentence classifiers, and the role-specific sentences are also given to the role filler extractors. This multi-layered approach creates an event extraction system that can discover role fillers in a variety of different contexts, while maintaining good precision. In the following sections, we position our research with respect to related work, present the details of our multi-layered event extraction model, and show experimental results for five event roles using the MUC-4 data set. 2 Related Work Some event extraction data sets only include documents that describe relevant events (e.g., wellknown data sets for the domains of corporate acquisitions (Freitag, 1998b; Freitag and McCallum, 2000; Finn and Kushmerick, 2004), job postings (Califf and Mooney, 2003; Freitag and McCallum, 2000), and seminar announcements (Freitag, 1998b; Ciravegna, 2001; Chieu and Ng, 2002; Finn and Kushmerick, 2004; Gu and Cercone, 2006). But many IE data sets present a more realistic task where the IE system must determine whether a relevant event is present in the document, and if so, extract its role fillers. Most of the Message Understanding Conference data sets represent this type of event extraction task, containing (roughly) a 50/50 mix of relevant and irrelevant documents (e.g., MUC-3, MUC-4, MUC-6, and MUC-7 (Hirschman, 1998)). Our research focuses on this setting where the event extraction system is not assured of getting only relevant documents to process. 
Most event extraction models can be characterized as either pattern-based or classifier-based approaches. Early event extraction systems used handcrafted patterns (e.g., (Appelt et al., 1993; Lehnert et al., 1991)), but more recent systems generate patterns or rules automatically using supervised learning (e.g., (Kim and Moldovan, 1993; Riloff, 1993; Soderland et al., 1995; Huffman, 1996; Freitag, 1998b; Ciravegna, 2001; Califf and Mooney, 2003)), weakly supervised learning (e.g., (Riloff, 1996; Riloff and Jones, 1999; Yangarber et al., 2000; Sudo et al., 2003; Stevenson and Greenwood, 2005)), or unsupervised learning (e.g., (Shinyama and Sekine, 2006; Sekine, 2006)). In addition, many classifiers have been created to sequentially label event role fillers in a sentence (e.g., (Freitag, 1998a; Chieu and Ng, 2002; Finn and Kushmerick, 2004; Li et al., 2005; Yu et al., 2005)). Research has also been done on relation extraction (e.g., (Roth and Yih, 2001; Zelenko et al., 2003; Bunescu and Mooney, 2007)), but that task is different from event extraction because it focuses on isolated relations rather than template-based event analysis. Most event extraction systems scan a text and search small context windows using patterns or a classifier. However, recent work has begun to ex1138 Figure 1: TIER: A Multi-Layered Architecture for Event Extraction plore more global approaches. (Maslennikov and Chua, 2007) use discourse trees and local syntactic dependencies in a pattern-based framework to incorporate wider context. Ji and Grishman (2008) enforce event role consistency across different documents. (Liao and Grishman, 2010) use cross-event inference to help with the extraction of role fillers shared across events. And there have been several recent IE models that explore the idea of identifying relevant sentences to gain a wider contextual view and then extracting role fillers. (Gu and Cercone, 2006) created HMMs to first identify relevant sentences, but their research focused on eliminating redundant extractions and worked with seminar announcements, where the system was only given relevant documents. (Patwardhan and Riloff, 2007) developed a system that learns to recognize event sentences and uses patterns that have a semantic affinity for an event role to extract role fillers. GLACIER (Patwardhan and Riloff, 2009) jointly considers sentential evidence and phrasal evidence in a unified probabilistic framework. Our research follows in the same spirit as these approaches by performing multiple levels of text analysis. But our event extraction model includes two novel contributions: (1) we develop a set of role-specific sentence classifiers to learn to recognize secondary contexts associated with each type of event role , and (2) we exploit text genre to incorporate a third level of analysis that enables the system to aggressively hunt for role fillers in documents that are event narratives. In Section 5, we compare the performance of our model with both the GLACIER system and Patwardhan & Riloff’s semantic affinity model. 3 A Multi-Layered Approach to Event Extraction The main idea behind our approach is to analyze documents at multiple levels of granularity in order to identify role fillers that occur in different types of contexts. Our event extraction model progressively “zooms in” on relevant information by first identifying the document type, then identifying sentences that are likely to contain relevant information, and finally analyzing individual noun phrases to identify role fillers. 
The key advantage of this architecture is that it allows us to search for information using two different principles: (1) we look for contexts that directly refer to the event, as per most traditional event extraction systems, and (2) we look for secondary contexts that are often associated with a specific type of role filler. Identifying these role-specific contexts can root out important facts would have been otherwise missed. Figure 1 shows the multi-layered pipeline of our event extraction system. An important aspect of our model is that two different strategies are employed to handle documents of different types. The event extraction task is to find any description of a relevant event, even if the event is not the topic of the article.1 Consequently, all documents are given to the event sentence recognizers and their mission is to identify any sentence that mentions a relevant event. This path through the pipeline is conservative because information is extracted only from event sentences, but all documents are processed, including stories that contain only a fleeting reference to a relevant event. 1Per the MUC-4 task definition (MUC-4 Proceedings, 1992). 1139 The second path through the pipeline performs additional processing for documents that belong to the event narrative text genre. For event narratives, we assume that most of the document discusses a relevant event so we can more aggressively hunt for event-related information in secondary contexts. In this section, we explain how we create the two types of sentence classifiers and the role filler extractors. We will return to the issue of document genre and the event narrative classifier in Section 4. 3.1 Sentence Classification We have argued that event role fillers commonly occur in two types of contexts: event contexts and role-specific secondary contexts. For the purposes of this research, we use sentences as our definition of a “context”, although there are obviously many other possible definitions. An event context is a sentence that describes the actual event. A secondary context is a sentence that provides information related to an event but in the context of other activities that precede or follow the event. For both types of classifiers, we use exactly the same feature set, but we train them in different ways. The MUC-4 corpus used in our experiments includes a training set consisting of documents and answer keys. Each document that describes a relevant event has answer key templates with the role fillers (answer key strings) for each event. To train the event sentence recognizer, we consider a sentence to be a positive training instance if it contains one or more answer key strings from any of the event roles. This produced 3,092 positive training sentences. All remaining sentences that do not contain any answer key strings are used as negative instances. This produced 19,313 negative training sentences, yielding a roughly 6:1 ratio of negative to positive instances. There is no guarantee that a classifier trained in this way will identify event sentences, but our hypothesis was that training across all of the event roles together would produce a classifier that learns to recognize general event contexts. This approach was also used to train GLACIER’s sentential event recognizer (Patwardhan and Riloff, 2009), and they demonstrated that this approach worked reasonably well when compared to training with event sentences labelled by human judges. 
The main contribution of our work is introducing additional role-specific sentence classifiers to seek out role fillers that appear in less obvious secondary contexts. We train a set of role-specific sentence classifiers, one for each type of event role. Every sentence that contains a role filler of the appropriate type is used as a positive training instance. Sentences that do not contain any answer key strings are negative instances.2 In this way, we force each classifier to focus on the contexts specific to its particular event role. We expect the role-specific sentence classifiers to find some secondary contexts that the event sentence classifier will miss, although some sentences may be classified as both. Using all possible negative instances would produce an extremely skewed ratio of negative to positive instances. To control the skew and keep the training set-up consistent with the event sentence classifier, we randomly choose from the negative instances to produce a 6:1 ratio of negative to positive instances. Both types of classifiers use an SVM model created with SVMlin (Keerthi and DeCoste, 2005), and exactly the same features. The feature set consists of the unigrams and bigrams that appear in the training texts, the semantic class of each noun phrase3, plus a few additional features to represent the tense of the main verb phrase in the sentence and whether the document is long (> 35 words) or short (< 5 words). All of the feature values are binary. 3.2 Role Filler Extractors Our extraction model also includes a set of role filler extractors, one per event role. Each extractor receives a sentence as input and determines which noun phrases (NPs) in the sentence are fillers for the event role. To train an SVM classifier, noun phrases corresponding to answer key strings for the event role are positive instances. We randomly choose among all noun phrases that are not in the answer keys to create a 10:1 ratio of negative to positive instances. 2We intentionally do not use sentences that contain fillers for competing event roles as negative instances because sentences often contain multiple role fillers of different types (e.g., a weapon may be found near a body). Sentences without any role fillers are certain to be irrelevant contexts. 3We used the Sundance parser (Riloff and Phillips, 2004) to identify noun phrases and assign semantic class labels. 1140 The feature set for the role filler extractors is much richer than that of the sentence classifiers because they must carefully consider the local context surrounding a noun phrase. We will refer to the noun phrase being labelled as the targeted NP. The role filler extractors use three types of features: Lexical features: we represent four words to the left and four words to the right of the targeted NP, as well as the head noun and modifiers (adjectives and noun modifiers) of the targeted NP itself. Lexico-syntactic patterns: we use the AutoSlog pattern generator (Riloff, 1993) to automatically create lexico-syntactic patterns around each noun phrase in the sentence. These patterns are similar to dependency relations in that they typically represent the syntactic role of the NP with respect to other constituents (e.g., subject-of, object-of, and noun arguments). Semantic features: we use the Stanford NER tagger (Finkel et al., 2005) to determine if the targeted NP is a named entity, and we use the Sundance parser (Riloff and Phillips, 2004) to assign semantic class labels to each NP’s head noun. 
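The feature set shared by the two kinds of sentence classifiers is compact enough to sketch concretely. The fragment below is illustrative only: it assumes whitespace tokenization and treats the semantic-class, tense, and document-length signals as inputs computed elsewhere, since those details (Sundance's class inventory, how tense is detected) are not given here.

```python
def sentence_features(tokens, np_semantic_classes, main_verb_tense,
                      doc_is_long, doc_is_short):
    """Binary feature set for the event and role-specific sentence classifiers.

    tokens:              list of word tokens in the sentence.
    np_semantic_classes: semantic class labels of the sentence's noun phrases
                         (e.g. from the Sundance parser).
    main_verb_tense:     tense tag of the main verb phrase (assumed given).
    doc_is_long/short:   document length indicators, as described above.
    Returns a set of binary feature names, suitable for a linear SVM such as SVMlin.
    """
    feats = set()
    feats.update("UNI=" + w.lower() for w in tokens)                 # unigrams
    feats.update("BI=" + a.lower() + "_" + b.lower()
                 for a, b in zip(tokens, tokens[1:]))                # bigrams
    feats.update("SEM=" + c for c in np_semantic_classes)            # NP semantic classes
    feats.add("TENSE=" + str(main_verb_tense))
    if doc_is_long:
        feats.add("DOC_LONG")
    if doc_is_short:
        feats.add("DOC_SHORT")
    return feats
```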
4 Event Narrative Document Classification One of our goals was to explore the use of document genre to permit more aggressive strategies for extracting role fillers. In this section, we first present an analysis of the MUC-4 data set which reveals the distribution of event narratives in the corpus, and then explain how we train a classifier to automatically identify event narrative stories. 4.1 Manual Analysis We define an event narrative as an article whose main focus is on reporting the details of an event. For the purposes of this research, we are only concerned with events that are relevant to the event extraction task (i.e., terrorism). An irrelevant document is an article that does not mention any relevant events. In between these extremes is another category of documents that briefly mention a relevant event, but the event is not the focus of the article. We will refer to these documents as fleeting reference documents. Many of the fleeting reference documents in the MUC-4 corpus are transcripts of interviews, speeches, or terrorist propaganda communiques that refer to a terrorist event and mention at least one role filler, but within a discussion about a different topic (e.g., the political ramifications of a terrorist incident). To gain a better understanding of how we might create a system to automatically distinguish event narrative documents from fleeting reference documents, we manually labelled the 116 relevant documents in our tuning set. This was an informal study solely to help us understand the nature of these texts. # of Event # of Fleeting Narratives Ref. Docs Acc Gold Standard 54 62 Heuristics 40 55 .82 Table 1: Manual Analysis of Document Types The first row of Table 1 shows the distribution of event narratives and fleeting references based on our “gold standard” manual annotations. We see that more than half of the relevant documents (62/116) are not focused on reporting a terrorist event, even though they contain information about a terrorist event somewhere in the document. 4.2 Heuristics for Event Narrative Identification Our goal is to train a document classifier to automatically identify event narratives. The MUC-4 answer keys reveal which documents are relevant and irrelevant with respect to the terrorism domain, but they do not tell us which relevant documents are event narratives and which are fleeting reference stories. Based on our manual analysis of the tuning set, we developed several heuristics to help separate them. We observed two types of clues: the location of the relevant information, and the density of relevant information. First, we noticed that event narratives tend to mention relevant information within the first several sentences, whereas fleeting reference texts usually mention relevant information only in the middle or end of the document. Therefore our first heuristic requires that an event narrative mention a role filler within the first 7 sentences. Second, event narratives generally have a higher density of relevant information. We use several criteria to estimate information density because a single criterion was inadequate to cover different sce1141 narios. For example, some documents mention role fillers throughout the document. Other documents contain a high concentration of role fillers in some parts of the document but no role fillers in other parts. We developed three density heuristics to account for different situations. All of these heuristics count distinct role fillers. 
The first density heuristic requires that more than 50% of the sentences contain at least one role filler (|RelSents| |AllSents| > 0.5) . Figure 2 shows histograms for different values of this ratio in the event narrative (a) vs. the fleeting reference documents (b). The histograms clearly show that documents with a high (> 50%) ratio are almost always event narratives. 0 .1 .2 .3 .4 .5 .6 .7 .8 .9 1 0 5 10 15 Ratio of Relevant Sentences # of Documents (a) 0 .1 .2 .3 .4 .5 .6 .7 .8 .9 1 0 5 10 15 Ratio of Relevant Sentences # of Documents (b) Figure 2: Histograms of Density Heuristic #1 in Event Narratives (a) vs. Fleeting References (b). A second density heuristic requires that the ratio of different types of roles filled to sentences be > 50% ( |Roles| |AllSents| > 0.5). A third density heuristic requires that the ratio of distinct role fillers to sentences be > 70% (|RoleF illers| |AllSents| > 0.7). If any of these three criteria are satisfied, then the document is considered to have a high density of relevant information.4 We use these heuristics to label a document as an event narrative if: (1) it has a high density of relevant information, and (2) it mentions a role filler within the first 7 sentences. The second row of Table 1 shows the performance of these heuristics on the tuning set. The heuristics correctly identify 40 54 event narratives and 55 62 fleeting reference stories, to achieve an overall accuracy of 82%. These results are undoubtedly optimistic because the heuristics were derived from analysis of the tuning set. But we felt confident enough to move forward with using these heuristics to generate train4Heuristic #1 covers most of the event narratives. ing data for an event narrative classifier. 4.3 Event Narrative Classifier The heuristics above use the answer keys to help determine whether a story belongs to the event narrative genre, but our goal is to create a classifier that can identify event narrative documents without the benefit of answer keys. So we used the heuristics to automatically create training data for a classifier by labelling each relevant document in the training set as an event narrative or a fleeting reference document. Of the 700 relevant documents, 292 were labeled as event narratives. We then trained a document classifier using the 292 event narrative documents as positive instances and all irrelevent training documents as negative instances. The 308 relevant documents that were not identified as event narratives were discarded to minimize noise (i.e., we estimate that our heuristics fail to identify 25% of the event narratives). We then trained an SVM classifier using bag-of-words (unigram) features. Table 2 shows the performance of the event narrative classifier on the manually labeled tuning set. The classifier identified 69% of the event narratives with 63% precision. Overall accuracy was 81%. Recall Precision Accuracy .69 .63 .81 Table 2: Event Narrative Classifier Results At first glance, the performance of this classifier is mediocre. However, these results should be interpreted loosely because there is not always a clear dividing line between event narratives and other documents. For example, some documents begin with a specific event description in the first few paragraphs but then digress to discuss other topics. Fortunately, it is not essential for TIER to have a perfect event narrative classifier since all documents will be processed by the event sentence recognizer anyway. 
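For concreteness, the heuristic labeling rule of Section 4.2 can be sketched as follows. This is a sketch only: how answer-key fills are attached to sentences, and whether distinct fillers are counted as strings or as (role, string) pairs, are assumptions not fixed by the text.

```python
def is_event_narrative(sentences, role_fillers_per_sentence):
    """Heuristically label a relevant training document as an event narrative.

    sentences:                 list of sentence strings in the document.
    role_fillers_per_sentence: list (same length) of sets of (role, filler)
                               pairs found in each sentence via the answer keys.
    """
    n = len(sentences)
    if n == 0:
        return False

    rel_sents = sum(1 for fillers in role_fillers_per_sentence if fillers)
    roles = {role for fillers in role_fillers_per_sentence for role, _ in fillers}
    fillers_seen = {f for fillers in role_fillers_per_sentence for f in fillers}

    # Density: any one of the three heuristics is enough.
    high_density = (rel_sents / n > 0.5 or
                    len(roles) / n > 0.5 or
                    len(fillers_seen) / n > 0.7)

    # Location: a role filler must appear within the first 7 sentences.
    early_mention = any(role_fillers_per_sentence[i] for i in range(min(7, n)))

    return high_density and early_mention
```

Applied to the relevant training documents, a rule of this form provides the positive instances for the event narrative classifier described in Section 4.3.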
The recall of the event narrative classifier means that nearly 70% of the event narratives will get additional scrutiny, which should help to find additional role fillers. Its precision of 63% means that some documents that are not event narratives will also get additional scrutiny, but information will be extracted only if both the role-specific sentence recognizer and NP extractors believe they have found 1142 Method PerpInd PerpOrg Target Victim Weapon Average Baselines AutoSlog-TS 33/49/40 52/33/41 54/59/56 49/54/51 38/44/41 45/48/46 Semantic Affinity 48/39/43 36/58/45 56/46/50 46/44/45 53/46/50 48/47/47 GLACIER 51/58/54 34/45/38 43/72/53 55/58/56 57/53/55 48/57/52 New Results without document classification AllSent 25/67/36 26/78/39 34/83/49 32/72/45 30/75/43 30/75/42 EventSent 52/54/53 50/44/47 52/67/59 55/51/53 56/57/56 53/54/54 RoleSent 37/54/44 37/58/45 49/75/59 52/60/55 38/66/48 43/63/51 EventSent+RoleSent 38/60/46 36/63/46 47/78/59 52/64/57 36/66/47 42/66/51 New Results with document classification DomDoc/EventSent+DomDoc/RoleSent 45/54/49 42/51/46 51/68/58 54/56/55 46/63/53 48/58/52 EventSent+DomDoc/RoleSent 43/59/50 45/61/52 51/77/61 52/61/56 44/66/53 47/65/54 EventSent+ENarrDoc/RoleSent 48/57/52 46/53/50 51/73/60 56/60/58 53/64/58 51/62/56 Table 3: Experimental results, reported as Precision/Recall/F-score something relevant. 4.4 Domain-relevant Document Classifier For comparison’s sake, we also created a document classifier to identify domain-relevant documents. That is, we trained a classifier to determine whether a document is relevant to the domain of terrorism, irrespective of the style of the document. We trained an SVM classifier with the same bag-ofwords feature set, using all relevant documents in the training set as positive instances and all irrelevant documents as negative instances. We use this classifier for several experiments described in the next section. 5 Evaluation 5.1 Data Set and Metrics We evaluated our approach on a standard benchmark collection for event extraction systems, the MUC-4 data set (MUC-4 Proceedings, 1992). The MUC-4 corpus consists of 1700 documents with associated answer key templates. To be consistent with previously reported results on this data set, we use the 1300 DEV documents for training, 200 documents (TST1+TST2) as a tuning set and 200 documents (TST3+TST4) as the test set. Roughly half of the documents are relevant (i.e., they mention at least 1 terrorist event) and the rest are irrelevant. We evaluate our system on the five MUC-4 “string-fill” event roles: perpetrator individuals, perpetrator organizations, physical targets, victims and weapons. The complete IE task involves template generation, which is complex because many documents have multiple templates (i.e., they discuss multiple events). Our work focuses on extracting individual facts and not on template generation per se (e.g., we do not perform coreference resolution or event tracking). Consequently, our evaluation follows that of other recent work and evaluates the accuracy of the extractions themselves by matching the head nouns of extracted NPs with the head nouns of answer key strings (e.g., “armed guerrillas” is considered to match “guerrillas”)5. Our results are reported as Precision/Recall/F(1)-score for each event role separately. We also show an overall average for all event roles combined.6 5.2 Baselines As baselines, we compare the performance of our IE system with three other event extraction systems. 
The first baseline is AutoSlog-TS (Riloff, 1996), which uses domain-specific extraction patterns. AutoSlog-TS applies its patterns to every sentence in every document, so does not attempt to explicitly identify relevant sentences or documents. The next two baselines are more recent systems: the (Patwardhan and Riloff, 2007) semantic affinity model and the (Patwardhan and Riloff, 2009) GLACIER system. The semantic affinity approach 5Pronouns were discarded since we do not perform coreference resolution. Duplicate extractions with the same head noun were counted as one hit or one miss. 6We generated the Average scores ourselves by macroaveraging over the scores reported for the individual event roles. 1143 explicitly identifies event sentences and uses patterns that have a semantic affinity for an event role to extract role fillers. GLACIER is a probabilistic model that incorporates both phrasal and sentential evidence jointly to label role fillers. The first 3 rows in Table 3 show the results for each of these systems on the MUC-4 data set. They all used the same evaluation criteria as our results. 5.3 Experimental Results The lower portion of Table 3 shows the results of a variety of event extraction models that we created using different components of our system. The AllSent row shows the performance of our Role Filler Extractors when applied to every sentence in every document. This system produced high recall, but precision was consistently low. The EventSent row shows the performance of our Role Filler Extractors applied only to the event sentences identified by our event sentence classifier. This boosts precision across all event roles, but with a sharp reduction in recall. We see a roughly 20 point swing from recall to precision. These results are similar to GLACIER’s results on most event roles, which isn’t surprising because GLACIER also incorporates event sentence identification. The RoleSent row shows the results of our Role Filler Extractors applied only to the role-specific sentences identified by our classifiers. We see a 1213 point swing from recall to precision compared to the AllSent row. This result is consistent with our hypothesis that many role fillers exist in rolespecific contexts that are not event sentences. As expected, extracting facts from role-specific contexts that do not necessarily refer to an event is less reliable. The EventSent+RoleSent row shows the results when information is extracted from both types of sentences. We see slightly higher recall, which confirms that one set of extractions is not a strict subset of the other, but precision is still relatively low. The next set of experiments incorporates document classification as the third layer of text analysis. The DomDoc/EventSent+DomDoc/RoleSent row shows the results of applying both types of sentence classifiers only to documents identified as domain-relevant by the Domain-relevant Document (DomDoc) Classifier described in Section 4.4. Extracting information only from domain-relevant documents improves precision by +6, but also sacrifices 8 points of recall. The EventSent row reveals that information found in event sentences has the highest precision, even without relying on document classification. We concluded that evidence of an event sentence is probably sufficient to warrant role filler extraction irrespective of the style of the document. 
As we discussed in Section 4, many documents contain only a fleeting reference to an event, so it is important to be able to extract information from those isolated event descriptions as well. Consequently, we created a system, EventSent+DomDoc/RoleSent, that extracts information from event sentences in all documents, but extracts information from role-specific sentences only if they appear in a domain-relevant document. This architecture captured the best of both worlds: recall improved from 58% to 65% with only a one point drop in precision. Finally, we evaluated the idea of using document genre as a filter instead of domain relevance. The last row, EventSent+ENarrDoc/RoleSent, shows the results of our final architecture which extracts information from event sentences in all documents, but extracts information from role-specific sentences only in Event Narrative documents. This architecture produced the best F1 score of 56. This model increases precision by an additional 4 points and produces the best balance of recall and precision. Overall, TIER’s multi-layered extraction architecture produced higher F1 scores than previous systems on four of the five event roles. The improved recall is due to the additional extractions from secondary contexts. The improved precision comes from our two-pronged strategy of treating event narratives differently from other documents. TIER aggressively searches for extractions in event narrative stories but is conservative and extracts information only from event sentences in all other documents. 5.4 Analysis We looked through some examples of TIER’s output to try to gain insight about its strengths and limitations. TIER’s role-specific sentence classifiers did correctly identify some sentences containing role fillers that were not classified as event sentences. Several examples are shown below, with the role 1144 fillers in italics: (1) “The victims were identified as David Lecky, director of the Columbus school, and James Arthur Donnelly.” (2) “There were seven children, including four of the Vice President’s children, in the home at the time.” (3) “The woman fled and sought refuge inside the facilities of the Salvadoran Alberto Masferrer University, where she took a group of students as hostages, threatening them with hand grenades.” (4) “The FMLN stated that several homes were damaged and that animals were killed in the surrounding hamlets and villages.” The first two sentences identify victims, but the terrorist event itself was mentioned earlier in the document. The third sentence contains a perpetrator (the woman), victims (students), and weapons (hand grenades) in the context of a hostage situation after the main event (a bus attack), when the perpetrator escaped. The fourth sentence describes incidental damage to civilian homes following clashes between government forces and guerrillas. However there is substantial room for improvement in each of TIER’s subcomponents, and many role fillers are still overlooked. One reason is that it can be difficult to recognize acts of terrorism. Many sentences refer to a potentially relevant subevent (e.g., injury or physical damage) but recognizing that the event is part of a terrorist incident depends on the larger discourse. 
For example, consider the examples below that TIER did not recognize as relevant sentences: (5) “Later, two individuals in a Chevrolet Opala automobile pointed AK rifles at the students, fired some shots, and quickly drove away.” (6) “Meanwhile, national police members who were dressed in civilian clothes seized university students Hugo Martinez and Raul Ramirez, who are still missing.” (7) “All labor union offices in San Salvador were looted.” In the first sentence, the event is described as someone pointing rifles at people and the perpetrators are referred to simply as individuals. There are no strong keywords in this sentence that reveal this is a terrorist attack. In the second sentence, police are being accused of state-sponsored terrorism when they seize civilians. The verb “seize” is common in this corpus, but usually refers to the seizing of weapons or drug stashes, not people. The third sentence describes a looting subevent. Acts of looting and vandalism are not usually considered to be terrorism, but in this article it is in the context of accusations of terrorist acts by government officials. 6 Conclusions We have presented a new approach to event extraction that uses three levels of analysis: document genre classification to identify event narrative stories, two types of sentence classifiers, and noun phrase classifiers. A key contribution of our work is the creation of role-specific sentence classifiers that can detect role fillers in secondary contexts that do not directly refer to the event. Another important aspect of our approach is a two-pronged strategy that handles event narratives differently from other documents. TIER aggressively hunts for role fillers in event narratives, but is conservative about extracting information from other documents. This strategy produced improvements in both recall and precision over previous state-of-the-art systems. This work just scratches the surface of using document genre identification to improve information extraction accuracy. In future work, we hope to identify additional types of document genre styles and incorporate genre directly into the extraction model. Coreference resolution and discourse analysis will also be important to further improve event extraction performance. 7 Acknowledgments We gratefully acknowledge the support of the National Science Foundation under grant IIS-1018314 and the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0172. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the U.S. government. 1145 References D. Appelt, J. Hobbs, J. Bear, D. Israel, and M. Tyson. 1993. FASTUS: a finite-state processor for information extraction from real-world text. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence. R. Bunescu and R. Mooney. 2007. Learning to Extract Relations from the Web using Minimal Supervision. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. M.E. Califf and R. Mooney. 2003. Bottom-up Relational Learning of Pattern Matching rules for Information Extraction. Journal of Machine Learning Research, 4:177–210. H.L. Chieu and H.T. Ng. 2002. A Maximum Entropy Approach to Information Extraction from SemiStructured and Free Text. In Proceedings of the 18th National Conference on Artificial Intelligence. F. 
Ciravegna. 2001. Adaptive Information Extraction from Text by Rule Induction and Generalisation. In Proceedings of the 17th International Joint Conference on Artificial Intelligence. J. Finkel, T. Grenager, and C. Manning. 2005. Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 363–370, Ann Arbor, MI, June. A. Finn and N. Kushmerick. 2004. Multi-level Boundary Classification for Information Extraction. In In Proceedings of the 15th European Conference on Machine Learning, pages 111–122, Pisa, Italy, September. D. Freitag and A. McCallum. 2000. Information Extraction with HMM Structures Learned by Stochastic Optimization. In Proceedings of the Seventeenth National Conference on Artificial Intelligence, pages 584–589, Austin, TX, August. Dayne Freitag. 1998a. Multistrategy Learning for Information Extraction. In Proceedings of the Fifteenth International Conference on Machine Learning. Morgan Kaufmann Publishers. Dayne Freitag. 1998b. Toward General-Purpose Learning for Information Extraction. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics. Z. Gu and N. Cercone. 2006. Segment-Based Hidden Markov Models for Information Extraction. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 481–488, Sydney, Australia, July. L. Hirschman. 1998. ”The Evolution of Evaluation: Lessons from the Message Understanding Conferences. Computer Speech and Language, 12. S. Huffman. 1996. Learning Information Extraction Patterns from Examples. In Stefan Wermter, Ellen Riloff, and Gabriele Scheler, editors, Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing, pages 246–260. SpringerVerlag, Berlin. H. Ji and R. Grishman. 2008. Refining Event Extraction through Cross-Document Inference. In Proceedings of ACL-08: HLT, pages 254–262, Columbus, OH, June. S. Keerthi and D. DeCoste. 2005. A Modified Finite Newton Method for Fast Solution of Large Scale Linear SVMs. Journal of Machine Learning Research. J. Kim and D. Moldovan. 1993. Acquisition of Semantic Patterns for Information Extraction from Corpora. In Proceedings of the Ninth IEEE Conference on Artificial Intelligence for Applications, pages 171–176, Los Alamitos, CA. IEEE Computer Society Press. W. Lehnert, C. Cardie, D. Fisher, E. Riloff, and R. Williams. 1991. University of Massachusetts: Description of the CIRCUS System as Used for MUC3. In Proceedings of the Third Message Understanding Conference (MUC-3), pages 223–233, San Mateo, CA. Morgan Kaufmann. Y. Li, K. Bontcheva, and H. Cunningham. 2005. Using Uneven Margins SVM and Perceptron for Information Extraction. In Proceedings of Ninth Conference on Computational Natural Language Learning, pages 72–79, Ann Arbor, MI, June. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of the 48st Annual Meeting on Association for Computational Linguistics (ACL-10). M. Maslennikov and T. Chua. 2007. A Multi-Resolution Framework for Information Extraction from Free Text. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics. MUC-4 Proceedings. 1992. Proceedings of the Fourth Message Understanding Conference (MUC-4). Morgan Kaufmann. S. Patwardhan and E. Riloff. 2007. 
Effective Information Extraction with Semantic Affinity Patterns and Relevant Regions. In Proceedings of 2007 the Conference on Empirical Methods in Natural Language Processing (EMNLP-2007). S. Patwardhan and E. Riloff. 2009. A Unified Model of Phrasal and Sentential Evidence for Information Extraction. In Proceedings of 2009 the Conference on Empirical Methods in Natural Language Processing (EMNLP-2009). E. Riloff and R. Jones. 1999. Learning Dictionaries for Information Extraction by Multi-Level Bootstrapping. In Proceedings of the Sixteenth National Conference on Artificial Intelligence. 1146 E. Riloff and W. Phillips. 2004. An Introduction to the Sundance and AutoSlog Systems. Technical Report UUCS-04-015, School of Computing, University of Utah. E. Riloff. 1993. Automatically Constructing a Dictionary for Information Extraction Tasks. In Proceedings of the 11th National Conference on Artificial Intelligence. E. Riloff. 1996. Automatically Generating Extraction Patterns from Untagged Text. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, pages 1044–1049. The AAAI Press/MIT Press. D. Roth and W. Yih. 2001. Relational Learning via Propositional Algorithms: An Information Extraction Case Study. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, pages 1257–1263, Seattle, WA, August. Satoshi Sekine. 2006. On-demand information extraction. In Proceedings of Joint Conference of the International Committee on Computational Linguistics and the Association for Computational Linguistics (COLING/ACL-06. Y. Shinyama and S. Sekine. 2006. Preemptive Information Extraction using Unrestricted Relation Discovery. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 304– 311, New York City, NY, June. S. Soderland, D. Fisher, J. Aseltine, and W. Lehnert. 1995. CRYSTAL: Inducing a conceptual dictionary. In Proc. of the Fourteenth International Joint Conference on Artificial Intelligence, pages 1314–1319. M. Stevenson and M. Greenwood. 2005. A Semantic Approach to IE Pattern Induction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 379–386, Ann Arbor, MI, June. K. Sudo, S. Sekine, and R. Grishman. 2003. An Improved Extraction Pattern Representation Model for Automatic IE Pattern Acquisition. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03). R. Yangarber, R. Grishman, P. Tapanainen, and S. Huttunen. 2000. Automatic Acquisition of Domain Knowledge for Information Extraction. In Proceedings of the Eighteenth International Conference on Computational Linguistics (COLING 2000). K. Yu, G. Guan, and M. Zhou. 2005. Resum´e Information Extraction with Cascaded Hybrid Model. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 499–506, Ann Arbor, MI, June. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel Methods for Relation Extraction. Journal of Machine Learning Research, 3. 1147
2011
114
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1148–1158, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Knowledge Base Population: Successful Approaches and Challenges Heng Ji Ralph Grishman Computer Science Department Computer Science Department Queens College and Graduate Center City University of New York New York University New York, NY 11367, USA New York, NY 10003, USA [email protected] [email protected] Abstract In this paper we give an overview of the Knowledge Base Population (KBP) track at the 2010 Text Analysis Conference. The main goal of KBP is to promote research in discovering facts about entities and augmenting a knowledge base (KB) with these facts. This is done through two tasks, Entity Linking – linking names in context to entities in the KB – and Slot Filling – adding information about an entity to the KB. A large source collection of newswire and web documents is provided from which systems are to discover information. Attributes (“slots”) derived from Wikipedia infoboxes are used to create the reference KB. In this paper we provide an overview of the techniques which can serve as a basis for a good KBP system, lay out the remaining challenges by comparison with traditional Information Extraction (IE) and Question Answering (QA) tasks, and provide some suggestions to address these challenges. 1 Introduction Traditional information extraction (IE) evaluations, such as the Message Understanding Conferences (MUC) and Automatic Content Extraction (ACE), assess the ability to extract information from individual documents in isolation. In practice, however, we may need to gather information about a person or organization that is scattered among the documents of a large collection. This requires the ability to identify the relevant documents and to integrate facts, possibly redundant, possibly complementary, possibly in conflict, coming from these documents. Furthermore, we may want to use the extracted information to augment an existing data base. This requires the ability to link individuals mentioned in a document, and information about these individuals, to entries in the data base. On the other hand, traditional Question Answering (QA) evaluations made limited efforts at disambiguating entities in queries (e.g. Pizzato et al., 2006), and limited use of relation/event extraction in answer search (e.g. McNamee et al., 2008). The Knowledge Base Population (KBP) shared task, conducted as part of the NIST Text Analysis Conference, aims to address and evaluate these capabilities, and bridge the IE and QA communities to promote research in discovering facts about entities and expanding a knowledge base with these facts. KBP is done through two separate subtasks, Entity Linking and Slot Filling; in 2010, 23 teams submitted results for one or both sub-tasks. A variety of approaches have been proposed to address both tasks with considerable success; nevertheless, there are many aspects of the task that remain unclear. What are the fundamental techniques used to achieve reasonable performance? What is the impact of each novel method? What types of problems are represented in the current KBP paradigm compared to traditional IE and QA? In which way have the current testbeds and evaluation methodology affected our perception of the task difficulty? Have we reached a performance ceiling with current state of the art techniques? 
What are the remaining challenges and what are the possible ways to address these challenges? In this paper we aim to answer some of these questions based on our detailed analysis of evaluation results. 1148 2 Task Definition and Evaluation Metrics This section will summarize the tasks conducted at KBP 2010. The overall goal of KBP is to automatically identify salient and novel entities, link them to corresponding Knowledge Base (KB) entries (if the linkage exists), then discover attributes about the entities, and finally expand the KB with any new attributes. In the Entity Linking task, given a person (PER), organization (ORG) or geo-political entity (GPE, a location with a government) query that consists of a name string and a background document containing that name string, the system is required to provide the ID of the KB entry to which the name refers; or NIL if there is no such KB entry. The background document, drawn from the KBP corpus, serves to disambiguate ambiguous name strings. In selecting among the KB entries, a system could make use of the Wikipedia text associated with each entry as well as the structured fields of each entry. In addition, there was an optional task where the system could only make use of the structured fields; this was intended to be representative of applications where no backing text was available. Each site could submit up to three runs with different parameters. The goal of Slot Filling is to collect from the corpus information regarding certain attributes of an entity, which may be a person or some type of organization. Each query in the Slot Filling task consists of the name of the entity, its type (person or organization), a background document containing the name (again, to disambiguate the query in case there are multiple entities with the same name), its node ID (if the entity appears in the knowledge base), and the attributes which need not be filled. Attributes are excluded if they are already filled in the reference data base and can only take on a single value. Along with each slot fill, the system must provide the ID of a document which supports the correctness of this fill. If the corpus does not provide any information for a given attribute, the system should generate a NIL response (and no document ID). KBP2010 defined 26 types of attributes for persons (such as the age, birthplace, spouse, children, job title, and employing organization) and 16 types of attributes for organizations (such as the top employees, the founder, the year founded, the headquarters location, and subsidiaries). Some of these attributes are specified as only taking a single value (e.g., birthplace), while some can take multiple values (e.g., top employees). The reference KB includes hundreds of thousands of entities based on articles from an October 2008 dump of English Wikipedia which includes 818,741 nodes. The source collection includes 1,286,609 newswire documents, 490,596 web documents and hundreds of transcribed spoken documents. To score Entity Linking, we take each query and check whether the KB node ID (or NIL) returned by a system is correct or not. Then we compute the Micro-averaged Accuracy, computed across all queries. To score Slot Filling, we first pool all the system responses (as is done for information retrieval evaluations) together with a set of manuallyprepared slot fills. These responses are then assessed by hand. Equivalent answers (such as “Bill Clinton” and “William Jefferson Clinton”) are grouped into equivalence classes. 
Each system response is rated as correct, wrong, or redundant (a response which is equivalent to another response for the same slot or an entry already in the knowledge base). Given these judgments, we count Correct = total number of non-NIL system output slots judged correct System = total number of non-NIL system output slots Reference = number of single-valued slots with a correct non-NIL response + number of equivalence classes for all listvalued slots Recall = Correct / Reference Precision = Correct / System F-Measure = (2 × Recall × Precision) / (Recall + Precision) 3 Entity Linking: What Works In Entity Linking, we saw a general improvement in performance over last year’s results – the top system achieved 85.78% micro-averaged accuracy. When measured against a benchmark based on inter-annotator agreement, two systems’ performance approached and one system exceeded the benchmark on person entities. 3.1 A General Architecture A typical entity linking system architecture is depicted in Figure 1. 1149 Figure 1. General Entity Linking System Architecture It includes three steps: (1) query expansion – expand the query into a richer set of forms using Wikipedia structure mining or coreference resolution in the background document. (2) candidate generation – finding all possible KB entries that a query might link to; (3) candidate ranking – rank the probabilities of all candidates and NIL answer. Table 1 summarizes the systems which exploited different approaches at each step. In the following subsections we will highlight the new and effective techniques used in entity linking. 3.2 Wikipedia Structure Mining Wikipedia articles are peppered with structured information and hyperlinks to other (on average 25) articles (Medelyan et al., 2009). Such information provides additional sources for entity linking: (1). Query Expansion: For example, WebTLab (Fernandez et al., 2010) used Wikipedia link structure (source, anchors, redirects and disambiguation) to extend the KB and compute entity cooccurrence estimates. Many other teams including CUNY and Siel used redirect pages and disambiguation pages for query expansion. The Siel team also exploited bold texts from first paragraphs because they often contain nicknames, alias names and full names. Methods System Examples System Ranking Range Wikipedia Hyperlink Mining CUNY (Chen et al., 2010), NUSchime (Zhang et al., 2010), Siel (Bysani et al., 2010), SMU-SIS (Gottipati et al., 2010), USFD (Yu et al., 2010), WebTLab team (Fernandez et al., 2010) [2, 15] Query Expansion Source document coreference resolution CUNY (Chen et al., 2010) 9 Document semantic analysis and context modeling ARPANI (Thomas et al., 2010), CUNY (Chen et al., 2010), LCC (Lehmann et al., 2010) [1,14] Candidate Generation IR CUNY (Chen et al., 2010), Budapestacad (Nemeskey et al., 2010), USFD (Yu et al., 2010) [9, 16] Unsupervised Similarity Computation (e.g. VSM) CUNY (Chen et al., 2010), SMU-SIS (Gottipati et al., 2010), USFD (Yu et al., 2010) [9, 14] Supervised Classification LCC (Lehmann et al., 2010), NUSchime (Zhang et al., 2010), Stanford-UBC (Chang et al., 2010), HLTCOE (McNamee, 2010), UC3M (Pablo-Sanchez et al., 2010) [1, 10] Rule-based LCC (Lehmann et al., 2010), BuptPris (Gao et al., 2010) [1, 8] Global Graph-based Ranking CMCRC (Radford et al., 2010) 3 Candidate Ranking IR Budapestacad (Nemeskey et al., 2010) 16 Table 1. 
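The slot-filling metric defined above is straightforward to compute once the pooled responses have been assessed; the sketch below assumes that every non-NIL response has already been judged by hand and that the equivalence classes have been formed, which is the manual part of the evaluation.

```python
def slot_filling_scores(judged_responses, n_correct_single_slots, n_list_equiv_classes):
    """Compute the KBP slot-filling Precision / Recall / F-measure defined above.

    judged_responses:       judgments for all non-NIL system slots, each one of
                            "correct", "wrong", or "redundant".
    n_correct_single_slots: number of single-valued slots with a correct
                            non-NIL response in the pooled assessments.
    n_list_equiv_classes:   number of answer equivalence classes over all
                            list-valued slots.
    """
    correct = sum(1 for j in judged_responses if j == "correct")
    system = len(judged_responses)
    reference = n_correct_single_slots + n_list_equiv_classes

    recall = correct / reference if reference else 0.0
    precision = correct / system if system else 0.0
    f1 = (2 * recall * precision / (recall + precision)) if (recall + precision) else 0.0
    return precision, recall, f1
```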
Entity Linking Method Comparison Query Query Expansion Wiki hyperlink mining Source doc Coreference Resolution KB Node Candidate Generation KB Node Candidate Ranking Wiki KB +Texts unsupervised similarity computation supervised classification IR Answer IR Document semantic analysis Graph -based 1150 (2). Candidate Ranking: Stanford-UBC used Wikipedia hyperlinks (clarification, disambiguation, title) for query re-mapping, and encoded lexical and part-of-speech features from Wikipedia articles containing hyperlinks to the queries to train a supervised classifier; they reported a significant improvement on micro-averaged accuracy, from 74.85% to 82.15%. In fact, when the mined attributes become rich enough, they can be used as an expanded query and sent into an information retrieval engine in order to obtain the relevant source documents. Budapestacad team (Nemeskey et al., 2010) adopted this strategy. 3.3 Ranking Approach Comparison The ranking approaches exploited in the KBP2010 entity linking systems can be generally categorized into four types: (1). Unsupervised or weakly-supervised learning, in which annotated data is minimally used to tune thresholds and parameters. The similarity measure is largely based on the unlabeled contexts. (2). Supervised learning, in which a pair of entity and KB node is modeled as an instance for classification. Such a classifier can be learned from the annotated training data based on many different features. (3). Graph-based ranking, in which context entities are taken into account in order to reach a global optimized solution together with the query entity. (4). IR (Information Retrieval) approach, in which the entire background source document is considered as a single query to retrieve the most relevant Wikipedia article. The first question we will investigate is how much higher performance can be achieved by using supervised learning? Among the 16 entity linking systems which participated in the regular evaluation, LCC (Lehmann et al., 2010), HLTCOE (McNamee, 2010), Stanford-UBC (Chang et al., 2010), NUSchime (Zhang et al., 2010) and UC3M (Pablo-Sanchez et al., 2010) have explicitly used supervised classification based on many lexical and name tagging features, and most of them are ranked in top 6 in the evaluation. Therefore we can conclude that supervised learning normally leads to a reasonably good performance. However, a highperforming entity linking system can also be implemented in an unsupervised fashion by exploiting effective characteristics and algorithms, as we will discuss in the next sections. 3.4 Semantic Relation Features Almost all entity linking systems have used semantic relations as features (e.g. BuptPris (Gao et al., 2010), CUNY (Chen et al., 2010) and HLTCOE). The semantic features used in the BuptPris system include name tagging, infoboxes, synonyms, variants and abbreviations. In the CUNY system, the semantic features are automatically extracted from their slot filling system. The results are summarized in Table 2, showing the gains over a baseline system (using only Wikipedia title features in the case of BuptPris, using tf-idf weighted word features for CUNY). As we can see, except for person entities in the BuptPris system, all types of entities have obtained significant improvement by using semantic features in entity linking. System Using Semantic Features PER ORG GPE Overall No 83.89 59.47 33.38 58.93 BuptPris Yes 79.09 74.13 66.62 73.29 No 84.55 63.07 57.54 59.91 CUNY Yes 92.81 65.73 84.10 69.29 Table 2. 
Impact of Semantic Features on Entity Linking (Micro-Averaged Accuracy %) 3.5 Context Inference In the current setting of KBP, a set of target entities is provided to each system in order to simplify the task and its evaluation, because it’s not feasible to require a system to generate answers for all possible entities in the entire source collection. However, ideally a fully-automatic KBP system should be able to automatically discover novel entities (“queries”) which have no KB entry or few slot fills in the KB, extract their attributes, and conduct global reasoning over these attributes in order to generate the final output. At the very least, due to the semantic coherence principle (McNamara, 2001), the information of an entity depends on the information of other entities. For example, the WebTLab team and the CMCRC team extracted all entities in the context of a given query, and disambiguated all entities at the same time using a PageRank-like algorithm (Page et al., 1998) or a Graph-based Re-ranking algorithm. The SMU-SIS team (Gottipati and Jiang, 2010) re-formulated queries using contexts. The LCC team modeled 1151 contexts using Wikipedia page concepts, and computed linkability scores iteratively. Consistent improvements were reported by the WebTLab system (from 63.64% to 66.58%). 4 Entity Linking: Remaining Challenges 4.1 Comparison with Traditional Crossdocument Coreference Resolution Part of the entity linking task can be modeled as a cross-document entity resolution problem which includes two principal challenges: the same entity can be referred to by more than one name string and the same name string can refer to more than one entity. The research on cross-document entity coreference resolution can be traced back to the Web People Search task (Artiles et al., 2007) and ACE2008 (e.g. Baron and Freedman, 2008). Compared to WePS and ACE, KBP requires linking an entity mention in a source document to a knowledge base with or without Wikipedia articles. Therefore sometimes the linking decisions heavily rely on entity profile comparison with Wikipedia infoboxes. In addition, KBP introduced GPE entity disambiguation. In source documents, especially in web data, usually few explicit attributes about GPE entities are provided, so an entity linking system also needs to conduct external knowledge discovery from background related documents or hyperlink mining. 4.2 Analysis of Difficult Queries There are 2250 queries in the Entity Linking evaluation; for 58 of them at most 5 (out of the 46) system runs produced correct answers. Most of these queries have corresponding KB entries. For 19 queries all 46 systems produced different results from the answer key. Interestingly, the systems which perform well on the difficult queries are not necessarily those achieved top overall performance – they were ranked 13rd, 6th, 5th, 12nd, 10th, and 16th respectively for overall queries. 11 queries are highly ambiguous city names which can exist in many states or countries (e.g. “Chester”), or refer to person or organization entities. From these most difficult queries we observed the following challenges and possible solutions. • Require deep understanding of context entities for GPE queries In a document where the query entity is not a central topic, the author often assumes that the readers have enough background knowledge (‘anchor’ location from the news release information, world knowledge or related documents) about these entities. 
For 6 queries, a system would need to interpret or extract attributes for their context entities. For example, in the following passage: …There are also photos of Jake on IHJ in Brentwood, still looking somber… in order to identify that the query “Brentwood” is located in California, a system will need to understand that “IHJ” is “I heart Jake community” and that the “Jake” referred to lives in Los Angeles, of which Brentwood is a part. In the following example, a system is required to capture the knowledge that “Chinese Christian man” normally appears in “China” or there is a “Mission School” in “Canton, China” in order to link the query “Canton” to the correct KB entry. This is a very difficult query also because the more common way of spelling “Canton” in China is “Guangdong”. …and was from a Mission School in Canton, … but for the energetic efforts of this Chinese Christian man and the Refuge Matron… • Require external hyperlink analysis Some queries require a system to conduct detailed analysis on the hyperlinks in the source document or the Wikipedia document. For example, in the source document “…Filed under: Falcons <http://sports.aol.com/fanhouse/category/atlantafalcons/>”, a system will need to analyze the document which this hyperlink refers to. Such cases might require new query reformulation and cross-document aggregation techniques, which are both beyond traditional entity disambiguation paradigms. 1152 • Require Entity Salience Ranking Some of these queries represent salient entities and so using web popularity rank (e.g. ranking/hit counts of Wikipedia pages from search engine) can yield correct answers in most cases (Bysani et al., 2010; Dredze et al., 2010). In fact we found that a naïve candidate ranking approach based on web popularity alone can achieve 71% micro-averaged accuracy, which is better than 24 system runs in KBP2010. Since the web information is used as a black box (including query expansion and query log analysis) which changes over time, it’s more difficult to duplicate research results. However, gazetteers with entities ranked by salience or major entities marked are worth encoding as additional features. For example, in the following passages: ... Tritschler brothers competed in gymnastics at the 1904 Games in St Louis 104 years ago” and “A chartered airliner carrying Democratic White House hopeful Barack Obama was forced to make an unscheduled landing on Monday in St. Louis after its flight crew detected mechanical problems… although there is little background information to decide where the query “St Louis” is located, a system can rely on such a major city list to generate the correct linking. Similarly, if a system knows that “Georgia Institute of Technology” has higher salience than “Georgian Technical University”, it can correctly link a query “Georgia Tech” in most cases. 5 Slot Filling: What Works 5.1 A General Architecture The slot-filling task is a hybrid of traditional IE (a fixed set of relations) and QA (responding to a query, generating a unified response from a large collection). Most participants met this challenge through a hybrid system which combined aspects of QA (passage retrieval) and IE (answer extraction). A few used off-the-shelf QA, either bypassing question analysis or (if QA was used as a “black box”) creating a set of questions corresponding to each slot. 
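To make the "QA as a black box" strategy concrete, a system can expand each slot into one or more natural-language question templates and pass the instantiated questions to an off-the-shelf QA engine. The sketch below is a hypothetical illustration: the template wording is invented here, and only the slot names follow the KBP naming convention.

```python
# Hypothetical slot-to-question templates for driving an off-the-shelf QA engine.
# The wording is invented for illustration; only the slot names follow KBP conventions.
QUESTION_TEMPLATES = {
    "per:children":              ["Who are the children of {name}?"],
    "per:spouse":                ["Who is {name} married to?"],
    "per:employee_of":           ["Which organization does {name} work for?"],
    "org:top_members/employees": ["Who are the top executives of {name}?"],
    "org:founded_by":            ["Who founded {name}?"],
}

def questions_for_query(entity_name, slots_to_fill):
    """Instantiate one question string per (slot, template) pair."""
    for slot in slots_to_fill:
        for template in QUESTION_TEMPLATES.get(slot, []):
            yield slot, template.format(name=entity_name)

for slot, question in questions_for_query("Barack Obama", ["per:spouse", "per:children"]):
    print(slot, "->", question)
```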
The basic system structure (Figure 2) involved three phases: document/passage retrieval (retrieving passages involving the queried entity), answer extraction (getting specific answers from the retrieved passages), and answer combination (merging and selecting among the answers extracted). The solutions adopted for answer extraction reflected the range of current IE methods as well as QA answer extraction techniques (see Table 3). Most systems used one main pipeline, while CUNY and BuptPris adopted a hybrid approach of combining multiple approaches. One particular challenge for KBP, in comparison with earlier IE tasks, was the paucity of training data. The official training data, linked to specific text from specific documents, consisted of responses to 100 queries; the participants jointly prepared responses to another 50. So traditional supervised learning, based directly on the training data, would provide limited coverage. Coverage could be improved by using the training data as seeds for a bootstrapping procedure. Figure 2. General Slot Filling System Architecture IE (Distant Learning/ Bootstrapping) Query Source Collection IR Document Level IR, QA Sentence/Passage Level Pattern Answer Level Classifier QA Training Data/ External KB Rules Answers Query Expansion Knowledge Base Redundancy Removal 1153 Methods System Examples Distant Learning (large seed, one iteration) CUNY (Chen et al., 2010) Pattern Learning Bootstrapping (small seed, multiple iterations) NYU (Grishman and Min, 2010) Distant Supervision Budapestacad (Nemeskey et al., 2010), lsv (Chrupala et al., 2010), Stanford (Surdeanu et al., 2010), UBC (Intxaurrondo et al., 2010) Trained IE Supervised Classifier Trained from KBP training data and other related tasks BuptPris (Gao et al., 2010), CUNY (Chen et al., 2010), IBM (Castelli et al., 2010), ICL (Song et al., 2010), LCC (Lehmann et al., 2010), lsv (Chrupala et al., 2010), Siel (Bysani et al., 2010) QA CUNY (Chen et al., 2010), iirg (Byrne and Dunnion, 2010) Hand-coded Heuristic Rules BuptPris (Gao et al., 2010), USFD (Yu et al., 2010) Table 3. Slot Filling Answer Extraction Method Comparison On the other hand, there were a lot of 'facts' available – pairs of entities bearing a relationship corresponding closely to the KBP relations – in the form of filled Wikipedia infoboxes. These could be used for various forms of indirect or distant learning, where instances in a large corpus of such pairs are taken as (positive) training instances. However, such instances are noisy – if a pair of entities participates in more than one relation, the found instance may not be an example of the intended relation – and so some filtering of the instances or resulting patterns may be needed. Several sites used such distant supervision to acquire patterns or train classifiers, in some cases combined with direct supervision using the training data (Chrupala et al., 2010). Several groups used and extended existing relation extraction systems, and then mapped the results into KBP slots. Mapping the ACE relations and events by themselves provided limited coverage (34% of slot fills in the training data), but was helpful when combined with other sources (e.g. CUNY). Groups with more extensive existing extraction systems could primarily build on these (e.g. LCC, IBM). For example, IBM (Castelli et al., 2010) extended their mention detection component to cover 36 entity types which include many non-ACE types; and added new relation types between entities and event anchors. 
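One recurring engineering step mentioned above is mapping the output of an existing extractor onto KBP slots. A lookup table over (relation subtype, argument role) pairs is often enough as a starting point; the fragment below is a hedged sketch of that idea, with a handful of ACE-style relation names and slot correspondences chosen for illustration rather than taken from any participant's actual mapping.

```python
# Illustrative (not exhaustive) mapping from ACE-style relation subtypes to KBP slots.
# Key: (relation subtype, role played by the query entity); value: slot filled by the
# other argument.  The specific correspondences are examples, not an official mapping.
ACE_TO_KBP = {
    ("ORG-AFF.Employment",    "arg1"): "per:employee_of",
    ("ORG-AFF.Founder",       "arg2"): "org:founded_by",
    ("PART-WHOLE.Subsidiary", "arg2"): "org:subsidiaries",
    ("PER-SOC.Family",        "arg1"): "per:other_family",  # coarse; finer slots need more cues
    ("GEN-AFF.Citizen-Resident-Religion-Ethnicity", "arg1"): "per:countries_of_residence",
}

def map_relation(rel_subtype: str, query_role: str):
    """Return the KBP slot a detected relation instance contributes to, or None."""
    return ACE_TO_KBP.get((rel_subtype, query_role))

print(map_relation("ORG-AFF.Employment", "arg1"))  # per:employee_of
print(map_relation("PHYS.Located", "arg1"))        # None -> not covered by this mapping
```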
LCC and CUNY applied active learning techniques to cover non-ACE types of entities, such as “origin”, “religion”, “title”, “charge”, “web-site” and “cause-of-death”, and effectively develop lexicons to filter spurious answers. Top systems also benefited from customizing and tightly integrating their recently enhanced extraction techniques into KBP. For example, IBM, NYU (Grishman and Min, 2010) and CUNY exploited entity coreference in pattern learning and reasoning. It is also notable that traditional extraction components trained from newswire data suffer from noise in web data. In order to address this problem, IBM applied their new robust mention detection techniques for noisy inputs (Florian et al., 2010); CUNY developed a component to recover structured forms such as tables in web data automatically and filter spurious answers. 5.2 Use of External Knowledge Base Many instance-centered knowledge bases that have harvested Wikipedia are proliferating on the semantic web. The most well known are probably the Wikipedia derived resources, including DBpedia (Auer 2007), Freebase (Bollacker 2008) and YAGO (Suchanek et al., 2007) and Linked Open Data (http://data.nytimes.com/). The main motivation of the KBP program is to automatically distill information from news and web unstructured data instead of manually constructed knowledge bases, but these existing knowledge bases can provide a large number of seed tuples to bootstrap slot filling or guide distant learning. Such resources can also be used in a more direct way. For example, CUNY exploited Freebase and LCC exploited DBpedia as fact validation in slot filling. However, most of these resources are manually created from single data modalities and only cover well-known entities. For example, while Freebase contains 116 million instances of 1154 7,300 relations for 9 million entities, it only covers 48% of the slot types and 5% of the slot answers in KBP2010 evaluation data. Therefore, both CUNY and LCC observed limited gains from the answer validation approach from Freebase. Both systems gained about 1% improvement in recall with a slight loss in precision. 5.3 Cross-Slot and Cross-Query Reasoning Slot Filling can also benefit from extracting revertible queries from the context of any target query, and conducting global ranking or reasoning to refine the results. CUNY and IBM developed recursive reasoning components to refine extraction results. For a given query, if there are no other related answer candidates available, they built "revertible” queries in the contexts, similar to (Prager et al., 2006), to enrich the inference process iteratively. For example, if a is extracted as the answer for org:subsidiaries of the query q, we can consider a as a new revertible query and verify that a org:parents answer of a is q. Both systems significantly benefited from recursive reasoning (CUNY F-measure on training data was enhanced from 33.57% to 35.29% and IBM F-measure was enhanced from 26% to 34.83%). 6 Slot Filling: Remaining Challenges Slot filling remains a very challenging task; only one system exceeded 30% F-measure on the 2010 evaluation. During the 2010 evaluation data annotation/adjudication process, an initial answer key annotation was created by a manual search of the corpus (resulting in 797 instances), and then an independent adjudication pass was applied to assess these annotations together with pooled system responses. The Precision, Recall and F-measure for the initial human annotation are only about 70%, 54% and 61% respectively. 
While we believe the annotation consistency can be improved, in part by refinement of the annotation guidelines, this does place a limit on system performance. Most of the shortfall in system performance reflects inadequacies in the answer extraction stage, reflecting limitations in the current state-of-the-art in information extraction. An analysis of the 2010 training data shows that cross-sentence coreference and some types of inference are critical to slot filling. In only 60.4% of the cases do the entity name and slot fill appear together in the same sentence, so a system which processes sentences in isolation is severely limited in its performance. 22.8% of the cases require cross-sentence (identity) coreference; 15% require some cross-sentence inference and 1.8% require cross-slot inference. The inferences include: • Non-identity coreference: in the following passage: “Lahoud is married to an Armenian and the couple have three children. Eldest son Emile Emile Lahoud was a member of parliament between 2000 and 2005.” the semantic relation between “children” and “son” needs to be exploited in order to generate “Emile Emile Lahoud” as the per:children of the query entity “Lahoud”; • Cross-slot inference based on revertible queries, propagation links or even world knowledge to capture some of the most challenging cases. In the KBP slot filling task, slots are often dependent on each other, so we can improve the results by improving the “coherence” of the story (i.e. consistency among all generated answers (query profiles)). In the following example: “People Magazine has confirmed that actress Julia Roberts has given birth to her third child a boy named Henry Daniel Moder. Henry was born Monday in Los Angeles and weighed 8? lbs. Roberts, 39, and husband Danny Moder, 38, are already parents to twins Hazel and Phinnaeus who were born in November 2006.” the following reasoning rules are needed to generate the answer “Henry Daniel Moder” as per:children of “Danny Moder”: ChildOf (“Henry Daniel Moder”, “Julia Roberts”) ∧ Coreferential (“Julia Roberts”, “Roberts”) ∧ SpouseOf (“Roberts”, “Danny Moder”) → ChildOf (“Henry Daniel Moder”, “Danny Moder”) KBP Slot Filling is similar to ACE Relation Extraction, which has been extensively studied for the past 7 years. However, the amount of training data is much smaller, forcing sites to adjust their training strategies. Also, some of the constraints of ACE relation mention extraction – notably, that both arguments are present in the same sentence – are not present, making the role of coreference and cross-sentence inference more critical. The role of coreference and inference as limiting factors, while generally recognized, is emphasized 1155 by examining the 163 slot values that the human annotators filled but that none of the systems were able to get correct. Many of these difficult cases involve a combination of problems, but we estimate that at least 25% of the examples involve coreference which is beyond current system capabilities, such as nominal anaphors: “Alexandra Burke is out with the video for her second single … taken from the British artist’s debut album” “a woman charged with running a prostitution ring … her business, Pamela Martin and Associates” (underlined phrases are coreferential). 
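The cross-slot inference illustrated by the Julia Roberts passage above can be approximated by a few hand-written rules over coreference-resolved (subject, slot, object) triples. The sketch below implements just the child-of-spouse rule from that example; the triple representation and rule set are simplifying assumptions, not any participant's actual reasoning component.

```python
def infer_children_of_spouse(facts):
    """Rule: ChildOf(c, x) and SpouseOf(x, y) => ChildOf(c, y).

    `facts` is a set of (subject, slot, object) triples whose argument strings
    are assumed to be already coreference-resolved; returns newly derived triples.
    """
    children = {(obj, subj) for (subj, slot, obj) in facts if slot == "per:children"}
    spouses = {(subj, obj) for (subj, slot, obj) in facts if slot == "per:spouse"}
    derived = set()
    for child, parent in children:
        for a, b in spouses:               # the spouse relation is symmetric
            if parent == a:
                derived.add((b, "per:children", child))
            elif parent == b:
                derived.add((a, "per:children", child))
    return derived - facts

facts = {
    ("Julia Roberts", "per:children", "Henry Daniel Moder"),
    ("Julia Roberts", "per:spouse", "Danny Moder"),
}
print(infer_children_of_spouse(facts))
# {('Danny Moder', 'per:children', 'Henry Daniel Moder')}
```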
While the types of inferences which may be required is open-ended, certain types come up repeatedly, reflecting the types of slots to be filled: systems would benefit from specialists which are able to reason about times, locations, family relationships, and employment relationships. 7 Toward System Combination The increasing number of diverse approaches based on different resources provide new opportunities for both entity linking and slot filling tasks to benefit from system combination. The NUSchime entity linking system trained a SVM based re-scoring model to combine two individual pipelines. Only one feature based on confidence values from the pipelines was used for rescoring. The micro-averaged accuracy was enhanced from 79.29%/79.07% to 79.38% after combination. We also applied a voting approach on the top 9 entity linking systems and found that all combination orders achieved significant gains, with the highest absolute improvement of 4.7% in micro-averaged accuracy over the top entity linking system. The CUNY slot filling system trained a maximum-entropy-based re-ranking model to combine three individual pipelines, based on various global features including voting and dependency relations. Significant gain in F-measure was achieved: from 17.9%, 27.7% and 21.0% (on training data) to 34.3% after combination. When we applied the same re-ranking approach to the slot filling systems which were ranked from the 2nd to 14th, we achieved 4.3% higher F-score than the best of these systems. 8 Conclusion Compared to traditional IE and QA tasks, KBP has raised some interesting and important research issues: It places more emphasis on cross-document entity resolution which received limited effort in ACE; it forces systems to deal with redundant and conflicting answers across large corpora; it links the facts in text to a knowledge base so that NLP and data mining/database communities have a better chance to collaborate; it provides opportunities to develop novel training methods such as distant (and noisy) supervision through Infoboxes (Surdeanu et al., 2010; Chen et al., 2010). In this paper, we provided detailed analysis of the reasons which have made KBP a more challenging task, shared our observations and lessons learned from the evaluation, and suggested some possible research directions to address these challenges which may be helpful for current and new participants, or IE and QA researchers in general. Acknowledgements The first author was supported by the U.S. Army Research Laboratory under Cooperative Agreement Number W911NF-09-2-0053, the U.S. NSF CAREER Award under Grant IIS-0953149 and PSC-CUNY Research Program. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon. References Javier Artiles, Julio Gonzalo and Satoshi Sekine. 2007. The SemEval-2007 WePS Evaluation: Establishing a benchmark for the Web People Search Task. Proc. the 4th International Workshop on Semantic Evaluations (Semeval-2007). S. Auer, C. Bizer, G. Kobilarov, J. Lehmann and Z. Ives. 2007. DBpedia: A nucleus for a web of open data. Proc. 6th International Semantic Web Conference. K. Balog, L. Azzopardi, M. de Rijke. 2008. Personal Name Resolution of Web People Search. Proc. 
WWW2008 Workshop: NLP Challenges in the Information Explosion Era (NLPIX 2008). 1156 Alex Baron and Marjorie Freedman. 2008. Who is Who and What is What: Experiments in Cross-Document Co-Reference. Proc. EMNLP 2008. K. Bollacker, R. Cook, and P. Tufts. 2007. Freebase: A Shared Database of Structured General Human Knowledge. Proc. National Conference on Artificial Intelligence (Volume 2). Lorna Byrne and John Dunnion. 2010. UCD IIRG at TAC 2010. Proc. TAC 2010 Workshop. Praveen Bysani, Kranthi Reddy, Vijay Bharath Reddy, Sudheer Kovelamudi, Prasad Pingali and Vasudeva Varma. 2010. IIIT Hyderabad in Guided Summarization and Knowledge Base Population. Proc. TAC 2010 Workshop. Vittorio Castelli, Radu Florian and Ding-jung Han. 2010. Slot Filling through Statistical Processing and Inference Rules. Proc. TAC 2010 Workshop. Angel X. Chang, Valentin I. Spitkovsky, Eric Yeh, Eneko Agirre and Christopher D. Manning. 2010. Stanford-UBC Entity Linking at TAC-KBP. Proc. TAC 2010 Workshop. Zheng Chen, Suzanne Tamang, Adam Lee, Xiang Li, Wen-Pin Lin, Matthew Snover, Javier Artiles, Marissa Passantino and Heng Ji. 2010. CUNYBLENDER TAC-KBP2010 Entity Linking and Slot Filling System Description. Proc. TAC 2010 Workshop. Grzegorz Chrupala, Saeedeh Momtazi, Michael Wiegand, Stefan Kazalski, Fang Xu, Benjamin Roth, Alexandra Balahur, Dietrick Klakow. Saarland University Spoken Language Systems at the Slot Filling Task of TAC KBP 2010. Proc. TAC 2010 Workshop. Mark Dredze, Paul McNamee, Delip Rao, Adam Gerber and Tim Finin. 2010. Entity Disambiguation for Knowledge Base Population. Proc. COLING 2010. Norberto Fernandez, Jesus A. Fisteus, Luis Sanchez and Eduardo Martin. 2010. WebTLab: A Cooccurencebased Approach to KBP 2010 Entity-Linking Task. Proc. TAC 2010 Workshop. Radu Florian, John F. Pitrelli, Salim Roukos and Imed Zitouni. 2010. Improving Mention Detection Robustness to Noisy Input. Proc. EMNLP2010. Sanyuan Gao, Yichao Cai, Si Li, Zongyu Zhang, Jingyi Guan, Yan Li, Hao Zhang, Weiran Xu and Jun Guo. 2010. PRIS at TAC2010 KBP Track. Proc. TAC 2010 Workshop. Swapna Gottipati and Jing Jiang. 2010. SMU-SIS at TAC 2010 – KBP Track Entity Linking. Proc. TAC 2010 Workshop. Ralph Grishman and Bonan Min. 2010. New York University KBP 2010 Slot-Filling System. Proc. TAC 2010 Workshop. Ander Intxaurrondo, Oier Lopez de Lacalle and Eneko Agirre. 2010. UBC at Slot Filling TAC-KBP2010. Proc. TAC 2010 Workshop. John Lehmann, Sean Monahan, Luke Nezda, Arnold Jung and Ying Shi. 2010. LCC Approaches to Knowledge Base Population at TAC 2010. Proc. TAC 2010 Workshop. Paul McNamee and Hoa Dang. 2009. Overview of the TAC 2009 Knowledge Base Population Track. Proc. TAC 2009 Workshop. Paul McNamee, Hoa Trang Dang, Heather Simpson, Patrick Schone and Stephanie M. Strassel. 2010. An Evaluation of Technologies for Knowledge Base Population. Proc. LREC2010. Paul McNamee, Rion Snow, Patrick Schone and James Mayfield. 2008. Learning Named Entity Hyponyms for Question Answering. Proc. IJCNLP2008. Paul McNamee. 2010. HLTCOE Efforts in Entity Linking at TAC KBP 2010. Proc. TAC 2010 Workshop. Danielle S McNamara. 2001. Reading both Highcoherence and Low-coherence Texts: Effects of Text Sequence and Prior Knowledge. Canadian Journal of Experimental Psychology. Olena Medelyan, Catherine Legg, David Milne and Ian H. Witten. 2009. Mining Meaning from Wikipedia. International Journal of Human-Computer Studies archive. Volume 67 , Issue 9. David Nemeskey, Gabor Recski, Attila Zseder and Andras Kornai. 2010. BUDAPESTACAD at TAC 2010. Proc. 
TAC 2010 Workshop. Cesar de Pablo-Sanchez, Juan Perea and Paloma Martinez. 2010. Combining Similarities with Regression based Classifiers for Entity Linking at TAC 2010. Proc. TAC 2010 Workshop. Lawrence Page, Sergey Brin, Rajeev Motwani and Terry Winograd. 1998. The PageRank Citation Ranking: Bringing Order to the Web. Proc. the 7th International World Wide Web Conference. Luiz Augusto Pizzato, Diego Molla and Cecile Paris. 2006. Pseudo Relevance Feedback Using Named Entities for Question Answering. Proc. the Australasian Language Technology Workshop 2006. J. Prager, P. Duboue, and J. Chu-Carroll. 2006. Improving QA Accuracy by Question Inversion. Proc. ACLCOLING 2006. 1157 Will Radford, Ben Hachey, Joel Nothman, Matthew Honnibal and James R. Curran. 2010. CMCRC at TAC10: Document-level Entity Linking with Graphbased Re-ranking. Proc. TAC 2010 Workshop. Yang Song, Zhengyan He and Houfeng Wang. 2010. ICL_KBP Approaches to Knowledge Base Population at TAC2010. Proc. TAC 2010 Workshop. F. M. Suchanek, G. Kasneci, and G. Weikum. 2007. Yago: A Core of Semantic Knowledge. Proc. 16th International World Wide Web Conference. Mihai Surdeanu, David McClosky, Julie Tibshirani, John Bauer, Angel X. Chang, Valentin I. Spitkovsky, Christopher D. Manning. 2010. A Simple Distant Supervision Approach for the TAC-KBP Slot Filling Task. Proc. TAC 2010 Workshop. Ani Thomas, Arpana Rawai, M K Kowar, Sanjay Sharma, Sarang Pitale and Neeraj Kharya. 2010. Bhilai Institute of Technology Durg at TAC 2010: Knowledge Base Population Task Challenge. Proc. TAC 2010 Workshop. Jingtao Yu, Omkar Mujgond and Rob Gaizauskas. 2010. The University of Sheffield System at TAC KBP 2010. Proc. TAC 2010 Workshop. Wei Zhang, Yan Chuan Sim, Jian Su and Chew Lim Tan. 2010. NUS-I2R: Learning a Combined System for Entity Linking. Proc. TAC 2010 Workshop. 1158
2011
115
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1159–1168, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Nonlinear Evidence Fusion and Propagation for Hyponymy Relation Mining Fan Zhang2* Shuming Shi1 Jing Liu2 Shuqi Sun3* Chin-Yew Lin1 1Microsoft Research Asia 2Nankai University, China 3Harbin Institute of Technology, China {shumings, cyl}@microsoft.com Abstract This paper focuses on mining the hyponymy (or is-a) relation from large-scale, open-domain web documents. A nonlinear probabilistic model is exploited to model the correlation between sentences in the aggregation of pattern matching results. Based on the model, we design a set of evidence combination and propagation algorithms. These significantly improve the result quality of existing approaches. Experimental results conducted on 500 million web pages and hypernym labels for 300 terms show over 20% performance improvement in terms of P@5, MAP and R-Precision. 1 Introduction1 An important task in text mining is the automatic extraction of entities and their lexical relations; this has wide applications in natural language processing and web search. This paper focuses on mining the hyponymy (or is-a) relation from largescale, open-domain web documents. From the viewpoint of entity classification, the problem is to automatically assign fine-grained class labels to terms. There have been a number of approaches (Hearst 1992; Pantel & Ravichandran 2004; Snow et al., 2005; Durme & Pasca, 2008; Talukdar et al., 2008) to address the problem. These methods typically exploited manually-designed or automatical * This work was performed when Fan Zhang and Shuqi Sun were interns at Microsoft Research Asia ly-learned patterns (e.g., “NP such as NP”, “NP like NP”, “NP is a NP”). Although some degree of success has been achieved with these efforts, the results are still far from perfect, in terms of both recall and precision. As will be demonstrated in this paper, even by processing a large corpus of 500 million web pages with the most popular patterns, we are not able to extract correct labels for many (especially rare) entities. Even for popular terms, incorrect results often appear in their label lists. The basic philosophy in existing hyponymy extraction approaches (and also many other textmining methods) is counting: count the number of supporting sentences. Here a supporting sentence of a term-label pair is a sentence from which the pair can be extracted via an extraction pattern. We demonstrate that the specific way of counting has a great impact on result quality, and that the state-ofthe-art counting methods are not optimal. Specifically, we examine the problem from the viewpoint of probabilistic evidence combination and find that the probabilistic assumption behind simple counting is the statistical independence between the observations of supporting sentences. By assuming a positive correlation between supporting sentence observations and adopting properly designed nonlinear combination functions, the results precision can be improved. It is hard to extract correct labels for rare terms from a web corpus due to the data sparseness problem. To address this issue, we propose an evidence propagation algorithm motivated by the observation that similar terms tend to share common hypernyms. For example, if we already know that 1) Helsinki and Tampere are cities, and 2) Porvoo is similar to Helsinki and Tampere, then Porvoo is 1159 very likely also a city. 
This intuition, however, does not mean that the labels of a term can always be transferred to its similar terms. For example, Mount Vesuvius and Kilimanjaro are volcanoes and Lhotse is similar to them, but Lhotse is not a volcano. Therefore we should be very conservative and careful in hypernym propagation. In our propagation algorithm, we first construct some pseudo supporting sentences for a term from the supporting sentences of its similar terms. Then we calculate label scores for terms by performing nonlinear evidence combination based on the (pseudo and real) supporting sentences. Such a nonlinear propagation algorithm is demonstrated to perform better than linear propagation. Experimental results on a publicly available collection of 500 million web pages with hypernym labels annotated for 300 terms show that our nonlinear evidence fusion and propagation significantly improve the precision and coverage of the extracted hyponymy data. This is one of the technologies adopted in our semantic search and mining system NeedleSeek2. In the next section, we discuss major related efforts and how they differ from our work. Section 3 is a brief description of the baseline approach. The probabilistic evidence combination model that we exploited is introduced in Section 4. Our main approach is illustrated in Section 5. Section 6 shows our experimental settings and results. Finally, Section 7 concludes this paper. 2 Related Work Existing efforts for hyponymy relation extraction have been conducted upon various types of data sources, including plain-text corpora (Hearst 1992; Pantel & Ravichandran, 2004; Snow et al., 2005; Snow et al., 2006; Banko, et al., 2007; Durme & Pasca, 2008; Talukdar et al., 2008), semistructured web pages (Cafarella et al., 2008; Shinzato & Torisawa, 2004), web search results (Geraci et al., 2006; Kozareva et al., 2008; Wang & Cohen, 2009), and query logs (Pasca 2010). Our target for optimization in this paper is the approaches that use lexico-syntactic patterns to extract hyponymy relations from plain-text corpora. Our future work will study the application of the proposed algorithms on other types of approaches. 2 http://research.microsoft.com/en-us/projects/needleseek/ or http://needleseek.msra.cn/ The probabilistic evidence combination model that we exploit here was first proposed in (Shi et al., 2009), for combining the page in-link evidence in building a nonlinear static-rank computation algorithm. We applied it to the hyponymy extraction problem because the model takes the dependency between supporting sentences into consideration and the resultant evidence fusion formulas are quite simple. In (Snow et al., 2006), a probabilistic model was adopted to combine evidence from heterogeneous relationships to jointly optimize the relationships. The independence of evidence was assumed in their model. In comparison, we show that better results will be obtained if the evidence correlation is modeled appropriately. Our evidence propagation is basically about using term similarity information to help instance labeling. There have been several approaches which improve hyponymy extraction with instance clusters built by distributional similarity. In (Pantel & Ravichandran, 2004), labels were assigned to the committee (i.e., representative members) of a semantic class and used as the hypernyms of the whole class. 
Labels generated by their approach tend to be rather coarse-grained, excluding the possibility of a term having its private labels (considering the case that one meaning of a term is not covered by the input semantic classes). In contrast to their method, our label scoring and ranking approach is applied to every single term rather than a semantic class. In addition, we also compute label scores in a nonlinear way, which improves results quality. In Snow et al. (2005), a supervised approach was proposed to improve hypernym classification using coordinate terms. In comparison, our approach is unsupervised. Durme & Pasca (2008) cleaned the set of instance-label pairs with a TF*IDF like method, by exploiting clusters of semantically related phrases. The core idea is to keep a term-label pair (T, L) only if the number of terms having the label L in the term T's cluster is above a threshold and if L is not the label of too many clusters (otherwise the pair will be discarded). In contrast, we are able to add new (high-quality) labels for a term with our evidence propagation method. On the other hand, low quality labels get smaller score gains via propagation and are ranked lower. Label propagation is performed in (Talukdar et al., 2008; Talukdar & Pereira, 2010) based on multiple instance-label graphs. Term similarity information was not used in their approach. Most existing work tends to utilize small-scale or private corpora, whereas the corpus that we used is publicly available and much larger than most of the existing work. We published our term sets (refer to Section 6.1) and their corresponding user judgments so researchers working on similar topics can reproduce our results.

Type        Pattern
Hearst-I    NPL {,} (such as) {NP,}* {and|or} NP
Hearst-II   NPL {,} (include(s) | including) {NP,}* {and|or} NP
Hearst-III  NPL {,} (e.g.|e.g) {NP,}* {and|or} NP
IsA-I       NP (is|are|was|were|being) (a|an) NPL
IsA-II      NP (is|are|was|were|being) {the, those} NPL
IsA-III     NP (is|are|was|were|being) {another, any} NPL
Table 1. Patterns adopted in this paper (NP: named phrase representing an entity; NPL: label)

3 Preliminaries
The problem addressed in this paper is corpus-based is-a relation mining: extracting hypernyms (as labels) for entities from a large-scale, open-domain document corpus. The desired output is a mapping from terms to their corresponding hypernyms, which can naturally be represented as a weighted bipartite graph (term-label graph). Typically we are only interested in top labels of a term in the graph. Following existing efforts, we adopt pattern matching as a basic way of extracting hypernymy/hyponymy relations. Two types of patterns (refer to Table 1) are employed, including the popular "Hearst patterns" (Hearst, 1992) and the IsA patterns which are exploited less frequently in existing hyponym mining efforts. One or more term-label pairs can be extracted if a pattern matches a sentence. In the baseline approach, the weight of an edge T→L (from term T to hypernym label L) in the term-label graph is computed as,

w(T→L) = m · IDF(L) = m · log( N / DF(L) )   (3.1)

where m is the number of times the pair (T, L) is extracted from the corpus, DF(L) is the number of in-links of L in the graph, N is total number of terms in the graph, and IDF means the "inverse document frequency". A term can only keep its top-k neighbors (according to the edge weight) in the graph as its final labels. Our pattern matching algorithm implemented in this paper uses part-of-speech (POS) tagging information, without adopting a parser or a chunker. The noun phrase boundaries (for terms and labels) are determined by a manually designed POS tag list.
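A minimal sketch of this baseline (Formula 3.1) follows; it assumes the pattern-matching stage has already produced one (term, label) pair per supporting sentence, which is a simplification of the actual pipeline.

```python
import math
from collections import Counter, defaultdict

def baseline_scores(extracted_pairs):
    """Score labels per term as m * IDF(L) = m * log(N / DF(L))  (Formula 3.1).

    `extracted_pairs` is a list of (term, label) tuples, one per supporting
    sentence found by pattern matching; this flat layout is an assumption of
    the sketch.
    """
    pair_count = Counter(extracted_pairs)                  # m for each (T, L)
    terms_per_label = defaultdict(set)
    for term, label in extracted_pairs:
        terms_per_label[label].add(term)                   # in-links of L, i.e. DF(L)
    n_terms = len({term for term, _ in extracted_pairs})   # N
    scores = defaultdict(dict)
    for (term, label), m in pair_count.items():
        scores[term][label] = m * math.log(n_terms / len(terms_per_label[label]))
    return scores

pairs = [("Helsinki", "city"), ("Helsinki", "city"), ("Helsinki", "capital"),
         ("Tampere", "city"), ("Porvoo", "town")]
print(baseline_scores(pairs)["Helsinki"])   # 'city' and 'capital' with their edge weights
```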
4 Probabilistic Label-Scoring Model
Here we model the hyponymy extraction problem from the probability theory point of view, aiming at estimating the score of a term-label pair (i.e., the score of a label w.r.t. a term) with probabilistic evidence combination. The model was studied in (Shi et al., 2009) to combine the page in-link evidence in building a nonlinear static-rank computation algorithm. We represent the score of a term-label pair by the probability of the label being a correct hypernym of the term, and define the following events,
AT,L: Label L is a hypernym of term T (the abbreviated form A is used in this paper unless it is ambiguous).
Ei: The observation that (T, L) is extracted from a sentence Si via pattern matching (i.e., Si is a supporting sentence of the pair).
Assuming that we already know m supporting sentences (S1~Sm), our problem is to compute P(A|E1, E2, …, Em), the posterior probability that L is a hypernym of term T, given evidence E1~Em. Formally, we need to find a function f to satisfy,

P(A|E1,…,Em) = f( P(A), P(A|E1), …, P(A|Em) )   (4.1)

For simplicity, we first consider the case of m=2. The case of m>2 is quite similar. We start from the simple case of independent supporting sentences. That is,

P(E1, E2) = P(E1) · P(E2)   (4.2)
P(E1, E2 | A) = P(E1|A) · P(E2|A)   (4.3)

By applying Bayes rule, we get,

P(A|E1, E2) = P(A) · P(E1, E2 | A) / P(E1, E2)
            = P(A) · P(E1|A) · P(E2|A) / ( P(E1) · P(E2) )
            = P(A|E1) · P(A|E2) / P(A)   (4.4)

Then define

G(A|E) = log P(A|E) − log P(A) = log( P(A|E) / P(A) )

Here G(A|E) represents the log-probability-gain of A given E, with the meaning of the gain in the log-probability value of A after the evidence E is observed (or known). It is a measure of the impact of evidence E to the probability of event A. With the definition of G(A|E), Formula 4.4 can be transformed to,

G(A|E1, E2) = G(A|E1) + G(A|E2)   (4.5)

Therefore, if E1 and E2 are independent, the log-probability-gain of A given both pieces of evidence will exactly be the sum of the gains of A given every single piece of evidence respectively. It is easy to prove (by following a similar procedure) that the above Formula holds for the case of m>2, as long as the pieces of evidence are mutually independent. Therefore for a term-label pair with m mutually independent supporting sentences, if we set every gain G(A|Ei) to be a constant value g, the posterior gain score of the pair will be ∑_{i=1..m} g = m·g. If the value g is the IDF of label L, the posterior gain will be,

G(AT,L|E1,…,Em) = ∑_{i=1..m} IDF(L) = m · log( N / DF(L) )   (4.6)

This is exactly the Formula 3.1. By this way, we provide a probabilistic explanation of scoring the candidate labels for a term via simple counting.

                                            Hearst-I   IsA-I   E1: Hearst-I, E2: IsA-I
RA = P(E1,E2|A) / ( P(E1|A) · P(E2|A) )      66.87      17.30    24.38
R  = P(E1,E2) / ( P(E1) · P(E2) )            5997       1711     802.7
RA / R                                       0.011      0.010    0.030
Table 2. Evidence dependency estimation for intra-pattern and inter-pattern supporting sentences

In the above analysis, we assume the statistical independence of the supporting sentence observations, which may not hold in reality. Intuitively, if we already know one supporting sentence S1 for a term-label pair (T, L), then we have more chance to find another supporting sentence than if we do not know S1. The reason is that, before we find S1, we have to estimate the probability with the chance of discovering a supporting sentence for a random term-label pair. The probability is quite low because most term-label pairs do not have hyponymy relations. Once we have observed S1, however, the chance of (T, L) having a hyponymy relation increases.
Therefore the chance of observing another supporting sentence becomes larger than before. Table 2 shows the rough estimation of ( ) ( ) ( ) (denoted as RA), ( ) ( ) ( ) (denoted as R), and their ratios. The statistics are obtained by performing maximal likelihood estimation (MLE) upon our corpus and a random selection of term-label pairs from our term sets (see Section 6.1) together with their top labels3. The data verifies our analysis about the correlation between E1 and E2 (note that R=1 means independent). In addition, it can be seen that the conditional independence assumption of Formula 4.3 does not hold (because RA>1). It is hence necessary to consider the correlation between supporting sentences in the model. The estimation of Table 2 also indicates that, ( ) ( ) ( ) ( ) ( ) ( ) (4.7) By following a similar procedure as above, with Formulas 4.2 and 4.3 replaced by 4.7, we have, ( ) ( ) ( ) (4.8) This formula indicates that when the supporting sentences are positively correlated, the posterior score of label L w.r.t. term T (given both the sentences) is smaller than the sum of the gains caused by one sentence only. In the extreme case that sentence S2 fully depends on E1 (i.e. P(E2|E1)=1), it is easy to prove that ( ) ( ) It is reasonable, since event E2 does not bring in more information than E1. Formula 4.8 cannot be used directly for computing the posterior gain. What we really need is a function h satisfying ( ) ( ( ) ( )) (4.9) and ( ) ∑ (4.10) Shi et al. (2009) discussed other constraints to h and suggested the following nonlinear functions, ( ) ( ∑ ( ) ) (4.11) 3 RA is estimated from the labels judged as “Good”; whereas the estimation of R is from all judged labels. 1162 ( ) √∑ (p>1) (4.12) In the next section, we use the above two h functions as basic building blocks to compute label scores for terms. 5 Our Approach Multiple types of patterns (Table 1) can be adopted to extract term-label pairs. For two supporting sentences the correlation between them may depend on whether they correspond to the same pattern. In Section 5.1, our nonlinear evidence fusion formulas are constructed by making specific assumptions about the correlation between intra-pattern supporting sentences and inter-pattern ones. Then in Section 5.2, we introduce our evidence propagation technique in which the evidence of a (T, L) pair is propagated to the terms similar to T. 5.1 Nonlinear evidence fusion For a term-label pair (T, L), assuming K patterns are used for hyponymy extraction and the supporting sentences discovered with pattern i are, (5.1) where mi is the number of supporting sentences corresponding to pattern i. Also assume the gain score of Si,j is xi,j, i.e., xi,j=G(A|Si,j). Generally speaking, supporting sentences corresponding to the same pattern typically have a higher correlation than the sentences corresponding to different patterns. This can be verified by the data in Table-2. By ignoring the inter-pattern correlations, we make the following simplified assumption: Assumption: Supporting sentences corresponding to the same pattern are correlated, while those of different patterns are independent. According to this assumption, our label-scoring function is, ( ) ∑ ( ) (5.2) In the simple case that ( ), if the h function of Formula 4.12 is adopted, then, ( ) (∑√ ) ( ) (5.3) We use an example to illustrate the above formula. 
Example: For term T and label L1, assume the numbers of the supporting sentences corresponding to the six pattern types in Table 1 are (4, 4, 4, 4, 4, 4), which means the number of supporting sentences discovered by each pattern type is 4. Also assume the supporting-sentence-count vector of label L2 is (25, 0, 0, 0, 0, 0). If we use Formula 5.3 to compute the scores of L1 and L2, we can have the following (ignoring IDF for simplicity),

Score(L1) = √4 + √4 + √4 + √4 + √4 + √4 = 12;   Score(L2) = √25 = 5

On the other hand, if we simply count the total number of supporting sentences, the score of L2 will be larger. The rationale implied in the formula is: For a given term T, the labels supported by multiple types of patterns tend to be more reliable than those supported by a single pattern type, if they have the same number of supporting sentences.

5.2 Evidence propagation
According to the evidence fusion algorithm described above, in order to extract term labels reliably, it is desirable to have many supporting sentences of different types. This is a big challenge for rare terms, due to their low frequency in sentences (and even lower frequency in supporting sentences because not all occurrences can be covered by patterns). With evidence propagation, we aim at discovering more supporting sentences for terms (especially rare terms). Evidence propagation is motivated by the following two observations:
(I) Similar entities or coordinate terms tend to share some common hypernyms.
(II) Large term similarity graphs are able to be built efficiently with state-of-the-art techniques (Agirre et al., 2009; Pantel et al., 2009; Shi et al., 2010). With the graphs, we can obtain the similarity between two terms without their hypernyms being available.
The first observation motivates us to "borrow" the supporting sentences from other terms as auxiliary evidence of the term. The second observation means that new information is brought with the state-of-the-art term similarity graphs (in addition to the term-label information discovered with the patterns of Table 1). Our evidence propagation algorithm contains two phases. In phase I, some pseudo supporting sentences are constructed for a term from the supporting sentences of its neighbors in the similarity graph. Then we calculate the label scores for terms based on their (pseudo and real) supporting sentences.
Phase I: For every supporting sentence S and every similar term T1 of the term T, add a pseudo supporting sentence S1 for T1, with the gain score,

G(AT1,L|S1) = α · sim(T, T1) · G(AT,L|S)   (5.5)

where α is the propagation factor, and sim(T, T1) is the term similarity function taking values in [0, 1]. The formula reasonably assumes that the gain score of the pseudo supporting sentence depends on the gain score of the original real supporting sentence, the similarity between the two terms, and the propagation factor.
Phase II: The nonlinear evidence combination formulas in the previous subsection are adopted to combine the evidence of pseudo supporting sentences.
Term similarity graphs can be obtained by distributional similarity or patterns (Agirre et al., 2009; Pantel et al., 2009; Shi et al., 2010). We call the first type of graph DS and the second type PB. DS approaches are based on the distributional hypothesis (Harris, 1985), which says that terms appearing in analogous contexts tend to be similar. In a DS approach, a term is represented by a feature vector, with each feature corresponding to a context in which the term appears.
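Pulling the pieces of this section together, the sketch below combines per-pattern supporting-sentence gains with the p-norm of Formula 4.12 (as in Formula 5.2), and then adds pseudo evidence from similar terms as in Phase I (Formula 5.5). The nested-dictionary layout, the choice p = 2, and the value of the propagation factor are assumptions made for the illustration.

```python
def h_pnorm(gains, p=2.0):
    """Combine the gains of correlated (same-pattern) sentences with a p-norm (Formula 4.12)."""
    return sum(g ** p for g in gains) ** (1.0 / p) if gains else 0.0

def label_score(per_pattern_gains, p=2.0):
    """Formula 5.2: p-norm within each pattern type, plain sum across pattern types."""
    return sum(h_pnorm(gains, p) for gains in per_pattern_gains.values())

def add_pseudo_evidence(evidence, similarity, alpha=0.5):
    """Phase I (Formula 5.5): for every real gain of term T, add a pseudo gain
    alpha * sim(T, T1) * gain to each neighbor T1 of T in the similarity graph.

    `evidence[T][L][pattern]` is a list of gain scores and `similarity[T]` maps
    neighbors to similarities in [0, 1]; both layouts are assumptions of this sketch.
    """
    pseudo = {}
    for term, labels in evidence.items():
        for neighbor, sim in similarity.get(term, {}).items():
            for label, patterns in labels.items():
                for pattern, gains in patterns.items():
                    slot = (pseudo.setdefault(neighbor, {})
                                  .setdefault(label, {})
                                  .setdefault(pattern, []))
                    slot.extend(alpha * sim * g for g in gains)
    merged = {}
    for source in (evidence, pseudo):          # Phase II scores real + pseudo gains together
        for term, labels in source.items():
            for label, patterns in labels.items():
                for pattern, gains in patterns.items():
                    (merged.setdefault(term, {})
                           .setdefault(label, {})
                           .setdefault(pattern, [])).extend(gains)
    return merged

evidence = {"Helsinki": {"city": {"Hearst-I": [1.0, 1.0], "IsA-I": [1.0]}}}
similarity = {"Helsinki": {"Porvoo": 0.8}}
merged = add_pseudo_evidence(evidence, similarity, alpha=0.5)
print(round(label_score(merged["Porvoo"]["city"]), 3))  # "Porvoo" borrows evidence for "city"
```

Setting p = 1 recovers the linear counting baseline of Formula 3.1, matching the special case noted in the experiments section.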
The similarity between two terms is computed as the similarity between their corresponding feature vectors. In PB approaches, a list of carefully-designed (or automatically learned) patterns is exploited and applied to a text collection, with the hypothesis that the terms extracted by applying each of the patterns to a specific piece of text tend to be similar. Two categories of patterns have been studied in the literature (Heast 1992; Pasca 2004; Kozareva et al., 2008; Zhang et al., 2009): sentence lexical patterns, and HTML tag patterns. An example of sentence lexical patterns is “T {, T}*{,} (and|or) T”. HTML tag patterns include HTML tables, drop-down lists, and other tag repeat patterns. In this paper, we generate the DS and PB graphs by adopting the best-performed methods studied in (Shi et al., 2010). We will compare, by experiments, the propagation performance of utilizing the two categories of graphs, and also investigate the performance of utilizing both graphs for evidence propagation. 6 Experiments 6.1 Experimental setup Corpus We adopt a publicly available dataset in our experiments: ClueWeb094. This is a very large dataset collected by Carnegie Mellon University in early 2009 and has been used by several tracks of the Text Retrieval Conference (TREC)5. The whole dataset consists of 1.04 billion web pages in ten languages while only those in English, about 500 million pages, are used in our experiments. The reason for selecting such a dataset is twofold: First, it is a corpus large enough for conducting webscale experiments and getting meaningful results. Second, since it is publicly available, it is possible for other researchers to reproduce the experiments in this paper. Term sets Approaches are evaluated by using two sets of selected terms: Wiki200, and Ext100. For every term in the term sets, each approach generates a list of hypernym labels, which are manually judged by human annotators. Wiki200 is constructed by first randomly selecting 400 Wikipedia6 titles as our candidate terms, with the probability of a title T being selected to be ( ( )), where F(T) is the frequency of T in our data corpus. The reason of adopting such a probability formula is to balance popular terms and rare ones in our term set. Then 200 terms are manually selected from the 400 candidate terms, with the principle of maximizing the diversity of terms in terms of length (i.e., number of words) and type (person, location, organization, software, movie, song, animal, plant, etc.). Wiki200 is further divided into two subsets: Wiki100H and Wiki100L, containing respectively the 100 high-frequency and lowfrequency terms. Ext100 is built by first selecting 200 non-Wikipedia-title terms at random from the term-label graph generated by the baseline approach (Formula 3.1), then manually selecting 100 terms. Some sample terms in the term sets are listed in Table 3. 4 http://boston.lti.cs.cmu.edu/Data/clueweb09/ 5 http://trec.nist.gov/ 6 http://www.wikipedia.org/ 1164 Term Set Sample Terms Wiki200 Canon EOS 400D, Disease management, El Salvador, Excellus Blue Cross Blue Shield, F33, Glasstron, Indium, Khandala, Kung Fu, Lake Greenwood, Le Gris, Liriope, Lionel Barrymore, Milk, Mount Alto, Northern Wei, Pink Lady, Shawshank, The Dog Island, White flight, World War II… Ext100 A2B, Antique gold, GPTEngine, Jinjiang Inn, Moyea SWF to Apple TV Converter, Nanny service, Outdoor living, Plasmid DNA, Popon, Spam detection, Taylor Ho Bynum, Villa Michelle… Table 3. 
Sample terms in our term sets Annotation For each term in the term set, the top-5 results (i.e., hypernym labels) of various methods are mixed and judged by human annotators. Each annotator assigns each result item a judgment of “Good”, “Fair” or “Bad”. The annotators do not know the method by which a result item is generated. Six annotators participated in the labeling with a rough speed of 15 minutes per term. We also encourage the annotators to add new good results which are not discovered by any method. The term sets and their corresponding user annotations are available for download at the following links (dataset ID=data.queryset.semcat01): http://research.microsoft.com/en-us/projects/needleseek/ http://needleseek.msra.cn/datasets/ Evaluation We adopt the following metrics to evaluate the hypernym list of a term generated by each method. The evaluation score on a term set is the average over all the terms. Precision@k: The percentage of relevant (good or fair) labels in the top-k results (labels judged as “Fair” are counted as 0.5) Recall@k: The ratio of relevant labels in the topk results to the total number of relevant labels R-Precision: Precision@R where R is the total number of labels judged as “Good” Mean average precision (MAP): The average of precision values at the positions of all good or fair results Before annotation and evaluation, the hypernym list generated by each method for each term is preprocessed to remove duplicate items. Two hypernyms are called duplicate items if they share the same head word (e.g., “military conflict” and “conflict”). For duplicate hypernyms, only the first (i.e., the highest ranked one) in the list is kept. The goal with such a preprocessing step is to partially consider results diversity in evaluation and to make a more meaningful comparison among different methods. Consider two hypernym lists for “subway”: List-1: restaurant; chain restaurant; worldwide chain restaurant; franchise; restaurant franchise… List-2: restaurant; franchise; transportation; company; fast food… There are more detailed hypernyms in the first list about “subway” as a restaurant or a franchise; while the second list covers a broader range of meanings for the term. It is hard to say which is better (without considering the upper-layer applications). With this preprocessing step, we keep our focus on short hypernyms rather than detailed ones. Term Set Method MAP R-Prec P@1 P@5 Wiki200 Linear 0.357 0.376 0.783 0.547 Log 0.371 3.92% 0.384 2.13% 0.803 2.55% 0.561 2.56% PNorm 0.372 4.20% 0.384 2.13% 0.800 2.17% 0.562 2.74% Wiki100H Linear 0.363 0.382 0.805 0.627 Log 0.393 8.26% 0.402 5.24% 0.845 4.97% 0.660 5.26% PNorm 0.395 8.82% 0.403 5.50% 0.840 4.35% 0.662 5.28% Table 4. Performance comparison among various evidence fusion methods (Term sets: Wiki200 and Wiki100H; p=2 for PNorm) 6.2 Experimental results We first compare the evaluation results of different evidence fusion methods mentioned in Section 4.1. In Table 4, Linear means that Formula 3.1 is used to calculate label scores, whereas Log and PNorm represent our nonlinear approach with Formulas 4.11 and 4.12 being utilized. The performance improvement numbers shown in the table are based on the linear version; and the upward pointing arrows indicate relative percentage improvement over the baseline. From the table, we can see that the nonlinear methods outperform the linear ones on the Wiki200 term set. 
It is interesting to note that the performance improvement is more significant on Wiki100H, the set of high frequency terms. By examining the labels and supporting sentences for the terms in each term set, we find that for many low-frequency terms (in Wiki100L), there are only a few supporting sentences (corresponding 1165 to one or two patterns). So the scores computed by various fusion algorithms tend to be similar. In contrast, more supporting sentences can be discovered for high-frequency terms. Much information is contained in the sentences about the hypernyms of the high-frequency terms, but the linear function of Formula 3.1 fails to make effective use of it. The two nonlinear methods achieve better performance by appropriately modeling the dependency between supporting sentences and computing the log-probability gain in a better way. The comparison of the linear and nonlinear methods on the Ext100 term set is shown in Table 5. Please note that the terms in Ext100 do not appear in Wikipedia titles. Thanks to the scale of the data corpus we are using, even the baseline approach achieves reasonably good performance. Please note that the terms (refer to Table 3) we are using are “harder” than those adopted for evaluation in many existing papers. Again, the results quality is improved with the nonlinear methods, although the performance improvement is not big due to the reason that most terms in Ext100 are rare. Please note that the recall (R@1, R@5) in this paper is pseudo-recall, i.e., we treat the number of known relevant (Good or Fair) results as the total number of relevant ones. Method MAP R-Prec P@1 P@5 R@1 R@5 Linear 0.384 0.429 0.665 0.472 0.116 0.385 Log 0.395 0.429 0.715 0.472 0.125 0.385 2.86% 0% 7.52% 0% 7.76% 0% PNorm 0.390 0.429 0.700 0.472 0.120 0.385 1.56% 0% 5.26% 0% 3.45% 0% Table 5. Performance comparison among various evidence fusion methods (Term set: Ext100; p=2 for PNorm) The parameter p in the PNorm method is related to the degree of correlations among supporting sentences. The linear method of Formula 3.1 corresponds to the special case of p=1; while p= represents the case that other supporting sentences are fully correlated to the supporting sentence with the maximal log-probability gain. Figure 1 shows that, for most of the term sets, the best performance is obtained for [2.0, 4.0]. The reason may be that the sentence correlations are better estimated with p values in this range. Figure 1. Performance curves of PNorm with different parameter values (Measure: MAP) The experimental results of evidence propagation are shown in Table 6. The methods for comparison are, Base: The linear function without propagation. NL: Nonlinear evidence fusion (PNorm with p=2) without propagation. LP: Linear propagation, i.e., the linear function is used to combine the evidence of pseudo supporting sentences. NLP: Nonlinear propagation where PNorm (p=2) is used to combine the pseudo supporting sentences. NL+NLP: The nonlinear method is used to combine both supporting sentences and pseudo supporting sentences. Method MAP R-Prec P@1 P@5 R@5 Base 0.357 0.376 0.783 0.547 0.317 NL 0.372 0.384 0.800 0.562 0.325 4.20% 2.13% 2.17% 2.74% 2.52% LP 0.357 0.376 0.783 0.547 0.317 0% 0% 0% 0% 0% NLP 0.396 0.418 0.785 0.605 0.357 10.9% 11.2% 0.26% 10.6% 12.6% NL+NLP 0.447 0.461 0.840 0.667 0.404 25.2% 22.6% 7.28% 21.9% 27.4% Table 6. 
Evidence propagation results (Term set: Wiki200; Similarity graph: PB; Nonlinear formula: PNorm) In this paper, we generate the DS (distributional similarity) and PB (pattern-based) graphs by adopting the best-performed methods studied in (Shi et al., 2010). The performance improvement numbers (indicated by the upward pointing arrows) shown in tables 6~9 are relative percentage improvement 1166 over the base approach (i.e., linear function without propagation). The values of parameter are set to maximize the MAP values. Several observations can be made from Table 6. First, no performance improvement can be obtained with the linear propagation method (LP), while the nonlinear propagation algorithm (NLP) works quite well in improving both precision and recall. The results demonstrate the high correlation between pseudo supporting sentences and the great potential of using term similarity to improve hypernymy extraction. The second observation is that the NL+NLP approach achieves a much larger performance improvement than NL and NLP. Similar results (omitted due to space limitation) can be observed on the Ext100 term set. Method MAP R-Prec P@1 P@5 R@5 Base 0.357 0.376 0.783 0.547 0.317 NL+NLP (PB) 0.415 0.439 0.830 0.633 0.379 16.2% 16.8% 6.00% 15.7% 19.6% NL+NLP (DS) 0.456 0.469 0.843 0.673 0.406 27.7% 24.7% 7.66% 23.0% 28.1% NL+NLP (PB+DS) 0.473 0.487 0.860 0.700 0.434 32.5% 29.5% 9.83% 28.0% 36.9% Table 7. Combination of PB and DS graphs for evidence propagation (Term set: Wiki200; Nonlinear formula: Log) Method MAP R-Prec P@1 P@5 R@5 Base 0.351 0.370 0.760 0.467 0.317 NL+NLP (PB) 0.411 0.448 0.770 0.564 0.401 ↑17.1% ↑21.1% ↑1.32% ↑20.8% ↑26.5% NL+NLP (DS) 0.469 0.490 0.815 0.622 0.438 33.6% 32.4% 7.24% 33.2% 38.2% NL+NLP (PB+DS) 0.491 0.513 0.860 0.654 0.479 39.9% 38.6% 13.2% 40.0% 51.1% Table 8. Combination of PB and DS graphs for evidence propagation (Term set: Wiki100L) Now let us study whether it is possible to combine the PB and DS graphs to obtain better results. As shown in Tables 7, 8, and 9 (for term sets Wiki200, Wiki100L, and Ext100 respectively, using the Log formula for fusion and propagation), utilizing both graphs really yields additional performance gains. We explain this by the fact that the information in the two term similarity graphs tends to be complimentary. The performance improvement over Wiki100L is especially remarkable. This is reasonable because rare terms do not have adequate information in their supporting sentences due to data sparseness. As a result, they benefit the most from the pseudo supporting sentences propagated with the similarity graphs. Method MAP R-Prec P@1 P@5 R@5 Base 0.384 0.429 0.665 0.472 0.385 NL+NLP (PB) 0.454 0.479 0.745 0.550 0.456 18.3% 11.7% 12.0% 16.5% 18.4% NL+NLP (DS) 0.404 0.441 0.720 0.486 0.402 5.18% 2.66% 8.27% 2.97% 4.37% NL+NLP(P B+DS) 0.483 0.518 0.760 0.586 0.492 26.0% 20.6% 14.3% 24.2% 27.6% Table 9. Combination of PB and DS graphs for evidence propagation (Term set: Ext100) 7 Conclusion We demonstrated that the way of aggregating supporting sentences has considerable impact on results quality of the hyponym extraction task using lexico-syntactic patterns, and the widely-used counting method is not optimal. We applied a series of nonlinear evidence fusion formulas to the problem and saw noticeable performance improvement. The data quality is improved further with the combination of nonlinear evidence fusion and evidence propagation. 
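One plausible shape of that combination is sketched below. It is our own illustration, with two explicit assumptions: that the same PNorm fusion is applied to both kinds of evidence, and that pseudo supporting sentences are down-weighted by the term similarity along which they were propagated (the tuned parameter mentioned above is omitted).

    def pnorm(gains, p=2.0):
        # PNorm fusion as sketched earlier: p = 1 is the linear sum, large p -> max.
        return sum(g ** p for g in gains) ** (1.0 / p) if gains else 0.0

    def nl_nlp_score(own_gains, pseudo_gains_by_neighbor, similarities, p=2.0):
        # Hypothetical NL+NLP scoring for one (term, label) pair: fuse the term's
        # own supporting-sentence gains together with the pseudo gains propagated
        # from similar terms in the PB/DS graph.
        pseudo = []
        for neighbor, gains in pseudo_gains_by_neighbor.items():
            sim = similarities[neighbor]           # edge weight to the similar term
            pseudo.extend(sim * g for g in gains)  # down-weight propagated evidence
        return pnorm(own_gains, p) + pnorm(pseudo, p)

Candidate hypernym labels for a term would then be ranked by this score.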
We also introduced a new evaluation corpus with annotated hypernym labels for 300 terms, which were shared with the research community. Acknowledgments We would like to thank Matt Callcut for reading through the paper. Thanks to the annotators for their efforts in judging the hypernym labels. Thanks to Yueguo Chen, Siyu Lei, and the anonymous reviewers for their helpful comments and suggestions. The first author is partially supported by the NSF of China (60903028,61070014), and Key Projects in the Tianjin Science and Technology Pillar Program. 1167 References E. Agirre, E. Alfonseca, K. Hall, J. Kravalova, M. Pasca, and A. Soroa. 2009. A Study on Similarity and Relatedness Using Distributional and WordNet-based Approaches. In Proc. of NAACL-HLT’2009. M. Banko, M.J. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. 2007. Open Information Extraction from the Web. In Proc. of IJCAI’2007. M. Cafarella, A. Halevy, D. Wang, E. Wu, and Y. Zhang. 2008. WebTables: Exploring the Power of Tables on the Web. In Proceedings of the 34th Conference on Very Large Data Bases (VLDB’2008), pages 538–549, Auckland, New Zealand. B. Van Durme and M. Pasca. 2008. Finding cars, goddesses and enzymes: Parametrizable acquisition of labeled instances for open-domain information extraction. Twenty-Third AAAI Conference on Artificial Intelligence. F. Geraci, M. Pellegrini, M. Maggini, and F. Sebastiani. 2006. Cluster Generation and Cluster Labelling for Web Snippets: A Fast and Accurate Hierarchical Solution. In Proceedings of the 13th Conference on String Processing and Information Retrieval (SPIRE’2006), pages 25–36, Glasgow, Scotland. Z. S. Harris. 1985. Distributional Structure. The Philosophy of Linguistics. New York: Oxford University Press. M. Hearst. 1992. Automatic Acquisition of Hyponyms from Large Text Corpora. In Fourteenth International Conference on Computational Linguistics, Nantes, France. Z. Kozareva, E. Riloff, E.H. Hovy. 2008. Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs. In Proc. of ACL'2008. P. Pantel, E. Crestan, A. Borkovsky, A.-M. Popescu and V. Vyas. 2009. Web-Scale Distributional Similarity and Entity Set Expansion. EMNLP’2009. Singapore. P. Pantel and D. Ravichandran. 2004. Automatically Labeling Semantic Classes. In Proc. of the 2004 Human Language Technology Conference (HLTNAACL’2004), 321–328. M. Pasca. 2004. Acquisition of Categorized Named Entities for Web Search. In Proc. of CIKM’2004. M. Pasca. 2010. The Role of Queries in Ranking Labeled Instances Extracted from Text. In Proc. of COLING’2010, Beijing, China. S. Shi, B. Lu, Y. Ma, and J.-R. Wen. 2009. Nonlinear Static-Rank Computation. In Proc. of CIKM’2009, Kong Kong. S. Shi, H. Zhang, X. Yuan, J.-R. Wen. 2010. Corpusbased Semantic Class Mining: Distributional vs. Pattern-Based Approaches. In Proc. of COLING’2010, Beijing, China. K. Shinzato and K. Torisawa. 2004. Acquiring Hyponymy Relations from Web Documents. In Proc. of the 2004 Human Language Technology Conference (HLT-NAACL’2004). R. Snow, D. Jurafsky, and A. Y. Ng. 2005. Learning Syntactic Patterns for Automatic Hypernym Discovery. In Proceedings of the 19th Conference on Neural Information Processing Systems. R. Snow, D. Jurafsky, and A. Y. Ng. 2006. Semantic Taxonomy Induction from Heterogenous Evidence. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL-06), 801–808. P. P. Talukdar and F. Pereira. 2010. 
Experiments in Graph-based Semi-Supervised Learning Methods for Class-Instance Acquisition. In 48th Annual Meeting of the Association for Computational Linguistics (ACL’2010). P. P. Talukdar, J. Reisinger, M. Pasca, D. Ravichandran, R. Bhagat, and F. Pereira. 2008. Weakly-Supervised Acquisition of Labeled Class Instances using Graph Random Walks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP’2008), pages 581–589. R.C. Wang. W.W. Cohen. Automatic Set Instance Extraction using the Web. In Proc. of the 47th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP’2009), pages 441–449, Singapore. H. Zhang, M. Zhu, S. Shi, and J.-R. Wen. 2009. Employing Topic Models for Pattern-based Semantic Class Discovery. In Proc. of the 47th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP’2009), pages 441–449, Singapore. 1168
2011
116
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1169–1178, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Pronoun Anaphora Resolution System based on Factorial Hidden Markov Models Dingcheng Li University of Minnesota, Twin Cities, Minnesosta [email protected] Tim Miller University of Wisconsin Milwaukee, Wisconsin [email protected] William Schuler The Ohio State University Columbus, Ohio [email protected] Abstract This paper presents a supervised pronoun anaphora resolution system based on factorial hidden Markov models (FHMMs). The basic idea is that the hidden states of FHMMs are an explicit short-term memory with an antecedent buffer containing recently described referents. Thus an observed pronoun can find its antecedent from the hidden buffer, or in terms of a generative model, the entries in the hidden buffer generate the corresponding pronouns. A system implementing this model is evaluated on the ACE corpus with promising performance. 1 Introduction Pronoun anaphora resolution is the task of finding the correct antecedent for a given pronominal anaphor in a document. It is a subtask of coreference resolution, which is the process of determining whether two or more linguistic expressions in a document refer to the same entity. Adopting terminology used in the Automatic Context Extraction (ACE) program (NIST, 2003), these expressions are called mentions. Each mention is a reference to some entity in the domain of discourse. Mentions usually fall into three categories – proper mentions (proper names), nominal mentions (descriptions), and pronominal mentions (pronouns). There is a great deal of related work on this subject, so the descriptions of other systems below are those which are most related or which the current model has drawn insight from. Pairwise models (Yang et al., 2004; Qiu et al., 2004) and graph-partitioning methods (McCallum and Wellner, 2003) decompose the task into a collection of pairwise or mention set coreference decisions. Decisions for each pair or each group of mentions are based on probabilities of features extracted by discriminative learning models. The aforementioned approaches have proven to be fruitful; however, there are some notable problems. Pairwise modeling may fail to produce coherent partitions. That is, if we link results of pairwise decisions to each other, there may be conflicting coreferences. Graph-partitioning methods attempt to reconcile pairwise scores into a final coherent clustering, but they are combinatorially harder to work with in discriminative approaches. One line of research aiming at overcoming the limitation of pairwise models is to learn a mentionranking model to rank preceding mentions for a given anaphor (Denis and Baldridge, 2007) This approach results in more coherent coreference chains. Recent years have also seen the revival of interest in generative models in both machine learning and natural language processing. Haghighi and Klein (2007), proposed an unsupervised nonparametric Bayesian model for coreference resolution. In contrast to pairwise models, this fully generative model produces each mention from a combination of global entity properties and local attentional state. Ng (2008) did similar work using the same unsupervised generative model, but relaxed head generation as head-index generation, enforced agreement constraints at the global level, and assigned salience only to pronouns. 
Another unsupervised generative model was recently presented to tackle only pronoun anaphora 1169 resolution (Charniak and Elsner, 2009). The expectation-maximization algorithm (EM) was applied to learn parameters automatically from the parsed version of the North American News Corpus (McClosky et al., 2008). This model generates a pronoun’s person, number and gender features along with the governor of the pronoun and the syntactic relation between the pronoun and the governor. This inference process allows the system to keep track of multiple hypotheses through time, including multiple different possible histories of the discourse. Haghighi and Klein (2010) improved their nonparametric model by sharing lexical statistics at the level of abstract entity types. Consequently, their model substantially reduces semantic compatibility errors. They report the best results to date on the complete end-to-end coreference task. Further, this model functions in an online setting at mention level. Namely, the system identifies mentions from a parse tree and resolves resolution with a left-to-right sequential beam search. This is similar to Luo (2005) where a Bell tree is used to score and store the searching path. In this paper, we present a supervised pronoun resolution system based on Factorial Hidden Markov Models (FHMMs). This system is motivated by human processing concerns, by operating incrementally and maintaining a limited short term memory for holding recently mentioned referents. According to Clark and Sengul (1979), anaphoric definite NPs are much faster retrieved if the antecedent of a pronoun is in immediately previous sentence. Therefore, a limited short term memory should be good enough for resolving the majority of pronouns. In order to construct an operable model, we also measured the average distance between pronouns and their antecedents as discussed in next sections and used distances as important salience features in the model. Second, like Morton (2000), the current system essentially uses prior information as a discourse model with a time-series manner, using a dynamic programming inference algorithm. Third, the FHMM described here is an integrated system, in contrast with (Haghighi and Klein, 2010). The model generates part of speech tags as simple structural information, as well as related semantic information at each time step or word-by-word step. While the framework described here can be extended to deeper structural information, POS tags alone are valuable as they can be used to incorporate the binding features (described below). Although the system described here is evaluated for pronoun resolution, the framework we describe can be extended to more general coreference resolution in a fairly straightforward manner. Further, as in other HMM-based systems, the system can be either supervised or unsupervised. But extensions to unsupervised learning are left for future work. The final results are compared with a few supervised systems as the mention-ranking model (Denis and Baldridge, 2007) and systems compared in their paper, and Charniak and Elsner’s (2009) unsupervised system, emPronouns. The FHMM-based pronoun resolution system does a better job than the global ranking technique and other approaches. This is a promising start for this novel FHMM-based pronoun resolution system. 2 Model Description This work is based on a graphical model framework called Factorial Hidden Markov Models (FHMMs). 
Unlike the more commonly known Hidden Markov Model (HMM), in an FHMM the hidden state at each time step is expanded to contain more than one random variable (as shown in Figure 1). This allows for the use of more complex hidden states by taking advantage of conditional independence between substates. This conditional independence allows complex hidden states to be learned with limited training data. 2.1 Factorial Hidden Markov Model Factorial Hidden Markov Models are an extension of HMMs (Ghahramani and Jordan, 1997). HMMs represent sequential data as a sequence of hidden states generating observation states (words in this case) at corresponding time steps t. A most likely sequence of hidden states can then be hypothesized given any sequence of observed states, using Bayes Law (Equation 2) and Markov independence assumptions (Equation 3) to define a full probability as the product of a Transition Model (ΘT ) prior probability and an Observation Model (ΘO) likelihood 1170 probability. ˆh1..T def = argmax h1..T P(h1..T | o1..T ) (1) def = argmax h1..T P(h1..T ) · P(o1..T | h1..T ) (2) def = argmax h1..T T Y t=1 PΘT (ht | ht−1) · PΘO(ot | ht) (3) For a simple HMM, the hidden state corresponding to each observation state only involves one variable. An FHMM contains more than one hidden variable in the hidden state. These hidden substates are usually layered processes that jointly generate the evidence. In the model described here, the substates are also coupled to allow interaction between the separate processes. As Figure 1 shows, the hidden states include three sub-states, op, cr and pos which are short forms of operation, coreference feature and part-of-speech. Then, the transition model expands the left term in (3) to (4). PΘT (ht | ht−1) def = P(opt | opt−1, post−1) ·P(crt | crt−1, opt−1) ·P(post | opt, post−1) (4) The observation model expands from the right term in (3) to (5). PΘO(ot | ht) def = P(ot | post, crt) (5) The observation state depends on more than one hidden state at each time step in an FHMM. Each hidden variable can be further split into smaller variables. What these terms stand for and the motivations behind the above equations will be explained in the next section. 2.2 Modeling a Coreference Resolver with FHMMs FHMMs in our model, like standard HMMs, cannot represent the hierarchical structure of a syntactic phrase. In order to partially represent this information, the head word is used to represent the whole noun phrase. After coreference is resolved, the coreferring chain can then be expanded to the whole phrase with NP chunker tools. In this system, hidden states are composed of three main variables: a referent operation (OP), coreference features (CR) and part of speech tags (POS) as displayed in Figure 1. The transition model is defined as Equation 4. opt-1= copy post-1= VBZ ot-1=loves et-1= per,org gt-1= neu,fem crt-1 opt= old post= PRP ot=them gt= fem,neu crt ht-1 ht et= org,per nt-1= plu,sing nt= sing,plu it-1= -,2 it= 0,2 Figure 1: Factorial HMM CR Model The starting point for the hidden state at each time step is the OP variable, which determines which kind of referent operations will occur at the current word. Its domain has three possible states: none, new and old. The none state indicates that the present state will not generate a mention. All previous hidden state values (the list of previous mentions) will be passed deterministically (with probability 1) to the current time step without any changes. 
The new state signifies that there is a new mention in the present time step. In this event, a new mention will be added to the entity set, as represented by its set of feature values and position in the coreference table. The old state indicates that there is a mention in the present time state and that this mention refers back to some antecedent mention. In such a case, the list of entities in the buffer will be reordered deterministically, moving the currently mentioned entity to the top of the list. Notice that opt is defined to depend on opt−1 and post−1. This is sometimes called a switching FHMM (Duh, 2005). This dependency can be useful, for example, if opt−1 is new, in which case opt has a higher probability of being none or old. If 1171 post−1 is a verb or preposition, opt has more probability of being old or new. One may wonder why opt generates post, and not the other way around. This model only roughly models the process of (new and old) entity generation, and either direction of causality might be consistent with a model of human entity generation, but this direction of causality is chosen to represent the effect of semantics (referents) generating syntax (POS tags). In addition, this is a joint model in which POS tagging and coreference resolution are integrated together, so the best combination of those hidden states will be computed in either case. 2.3 Coreference Features Coreference features for this model refer to features that may help to identify co-referring entities. In this paper, they mainly include index (I), named entity type (E), number (N) and gender (G). The index feature represents the order that a mention was encountered relative to the other mentions in the buffer. The latter three features are well known and described elsewhere, and are not themselves intended as the contribution of this work. The novel aspect of this part of the model is the fact that the features are carried forward, updated after every word, and essentially act as a discourse model. The features are just a shorthand way of representing some well known essential aspects of a referent (as pertains to anaphora resolution) in a discourse model. Features Values I positive integers from 1. . .n G male, female, neutral, unknown N singular, plural, unknown E person, location, organization, GPE, vehicle, company, facility Table 1: Coreference features stored with each mention. Unlike discriminative approaches, generative models like the FHMM described here do not have access to all observations at once. This model must then have a mechanism for jointly considering pronouns in tandem with previous mentions, as well as the features of those mentions that might be used to find matches between pronouns and antecedents. Further, higher order HMMs may contain more accurate information about observation states. This is especially true for coreference resolution because pronouns often refer back to mentions that are far away from the present state. In this case, we would need to know information about mentions which are at least two mentions before the present one. In this sense, a higher order HMM may seem ideal for coreference resolution. However, higher order HMMs will quickly become intractable as the order increases. 
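Before turning to how these limitations are handled, the referent operations and the feature inventory of Table 1 can be pictured concretely. The sketch below is our own illustrative code with hypothetical names, not the authors' implementation: it keeps a fixed-size list of recent mentions, with list position playing the role of the index feature.

    from dataclasses import dataclass
    from typing import List

    BUFFER_SIZE = 6  # the experiments below track only the six most recent mentions

    @dataclass
    class Mention:
        head: str
        gender: str       # male / female / neutral / unknown
        number: str       # singular / plural / unknown
        entity_type: str  # person, location, organization, GPE, ...

    def apply_none(buffer: List[Mention]) -> List[Mention]:
        # 'none': no mention at this word; the buffer is copied forward unchanged.
        return list(buffer)

    def apply_new(buffer: List[Mention], mention: Mention) -> List[Mention]:
        # 'new': a fresh referent is added at the top; the least recently
        # mentioned entity is dropped if the buffer is full.
        return ([mention] + buffer)[:BUFFER_SIZE]

    def apply_old(buffer: List[Mention], antecedent: int) -> List[Mention]:
        # 'old': the current mention re-refers to buffer[antecedent]; that
        # entity moves to the top and the rest shift down in order.
        return [buffer[antecedent]] + buffer[:antecedent] + buffer[antecedent + 1:]
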
In order to overcome these limitations, two strategies which have been discussed in the last section are taken: First, a switching variable called OP is designed (as discussed in last section); second, a memory of recently mentioned entities is maintained to store features of mentions and pass them forward incrementally. OP is intended to model the decision to use the current word to introduce a new referent (new), refer to an antecedent (old), or neither (none). The entity buffer is intended to model the set of ‘activated’ entities in the discourse – those which could plausibly be referred to with a pronoun. These designs allow similar benefits as longer dependencies of higherorder HMMs but avoid the problem of intractability. The number of mentions maintained must be limited in order for the model to be tractable. Fortunately, human short term memory faces effectively similar limitations and thus pronouns usually refer back to mentions not very far away. Even so, the impact of the size of the buffer on decoding time may be a concern. Since the buffer of our system will carry forward a few previous groups of coreference features plus op and pos, the computational complexity will be exorbitantly high if we keep high beam size and meanwhile if each feature interacts with others. Luckily, we have successfully reduced the intractability to a workable system in both speed and space with following methods. First, we estimate the size of buffer with a simple count of average distances between pronouns and their antecedents in the corpus. It is found that about six is enough for covering 99.2% of all pronouns. Secondly, the coreference features we have used have the nice property of being independent from one another. One might expect English non-person entities to almost always have neutral gender, and 1172 thus be modeled as follows: P(et, gt | et−1, gt−1) = P(gt | gt−1, et) · P(et | et−1) (6) However, a few considerations made us reconsider. First, exceptions are found in the corpus. Personal pronouns such as she or he are used to refer to country, regions, states or organizations. Second, existing model files made by Bergsma (2005) include a large number of non-neutral gender information for nonperson words. We employ these files for acquiring gender information of unknown words. If we use Equation 6, sparsity and complexity will increase. Further, preliminary experiments have shown models using an independence assumption between gender and personhood work better. Thus, we treat each coreference feature as an independent event. Hence, we can safely split coreference features into separate parts. This way dramatically reduces the model complexity. Thirdly, our HMM decoding uses the Viterbi algorithm with A-star beam search. The probability of the new state of the coreference table P(crt | crt−1, opt) is defined to be the product of probabilities of the individual feature transitions. P(crt | crt−1, opt) = P(it | it−1, opt)· P(et | et−1, it, opt)· P(gt | gt−1, it, opt)· P(nt | nt−1, it, opt) (7) This supposes that the features are conditionally independent of each other given the index variable, the operator and previous instance. Each feature only depends on the operator and the corresponding feature at the previous state, with that set of features re-ordered as specified by the index model. 
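Equation 7 can be read directly as a product of small per-feature models. The sketch below is our own illustration of that factorization for the old operation; the recency-biased index model is a made-up stand-in for the distribution that would be estimated from training data, and feature_models stands for the trained per-feature conditionals.

    def recency_index_model(antecedent_index, buffer_size, decay=0.5):
        # Hypothetical stand-in for P(i_t | i_{t-1}, op = old): more recently
        # mentioned entities (smaller index) are more likely antecedents.
        weights = [decay ** i for i in range(buffer_size)]
        return weights[antecedent_index] / sum(weights)

    def coref_transition_prob(prev_feats, curr_feats, antecedent_index,
                              buffer_size, feature_models):
        # Equation 7: P(cr_t | cr_{t-1}, op_t) as a product of per-feature
        # transition probabilities, each conditioned on the chosen index.
        # feature_models[f] is a trained function p(curr_value, prev_value, index).
        prob = recency_index_model(antecedent_index, buffer_size)
        for f in ("entity_type", "gender", "number"):
            prob *= feature_models[f](curr_feats[f], prev_feats[f], antecedent_index)
        return prob
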
2.4 Feature Passing Equation 7 is correct and complete, but in fact the switching variable for operation type results in three different cases which simplifies the calculation of the transition probabilities for the coreference feature table. Note the following observations about coreference features: it only needs a probabilistic model when opt is old – in other words, only when the model must choose between several antecedents to re-refer to. gt, et and nt are deterministic except when opt is new, when gender, entity type, and number information must be generated for the new entity being introduced. When opt is none, all coreference variables (entity features) will be copied over from the previous time step to the current time step, and the probability of this transition is 1.0. When opt is new, it is changed deterministically by adding the new entity to the first position in the list and moving every other entity down one position. If the list of entities is full, the least recently mentioned entity will be discarded. The values for the top of the feature lists gt, et, and nt will then be generated from featurespecific probability distributions estimated from the training data. When opt is old, it will probabilistically select a value 1 . . . n, for an entity list containing n items. The selected value will deterministically order the gt, nt and et lists. This distribution is also estimated from training data, and takes into account recency of mention. The shape of this distribution varies slightly depending on list size and noise in the training data, but in general the probability of a mention being selected is directly correlated to how recently it was mentioned. With this understanding, coreference table transition probabilities can be written in terms of only their non-deterministic substate distributions: P(crt | crt−1, old) = Pold(it | it−1)· Preorder(et | et−1, it)· Preorder(gt | gt−1, it)· Preorder(nt | nt−1, it) (8) where the old model probabilistically selects the antecedent and moves it to the top of the list as described above, thus deciding how the reordering will take place. The reorder model actually implements the list reordering for each independent feature by moving the feature value corresponding to the selected entity in the index model to the top of that feature’s list. The overall effect is simply the probabilistic reordering of entities in a list, where each entity is defined as a label and a set of features. P(crt | crt−1, new) = Pnew(it | it−1)· Pnew(gt | gt−1)· Pnew(nt | nt−1)· Pnew(et | et−1) (9) where the new model probabilistically generates a 1173 feature value based on the training data and puts it at the top of the list, moves every other entity down one position in the list, and removes the final item if the list is already full. Each entity in i takes a value from 1 to n for a list of size n. Each g can be one of four values – male, female, neuter and unknown; n one of three values – plural, singular and unknown and e around eight values. Note that post is used in both hidden states and observation states. While it is not considered a coreference feature as such, it can still play an important role in the resolving process. Basically, the system tags parts of speech incrementally while simultaneously resolving pronoun anaphora. Meanwhile, post−1 and opt−1 will jointly generate opt. This point has been discussed in Section 2.2. Importantly, the pos model can help to implement binding principles (Chomsky, 1981). It is applied when opt is old. 
In training, pronouns are sub-categorised into personal pronouns, reflexive and other-pronoun. We then define a variable loct whose value is how far back in the list of antecedents the current hypothesis must have gone to arrive at the current value of it. If we have the syntax annotations or parsed trees, then, the part of speech model can be defined when opt is old as Pbinding(post | loct, sloct). For example, if post ∈reflexive, P(post | loct, sloct) where loct has smaller values (implying closer mentions to post) and sloct = subject should have higher values since reflexive pronouns always refer back to subjects within its governing domains. This was what (Haghighi and Klein, 2009) did and we did this in training with the REUTERS corpus (Hasler et al., 2006) in which syntactic roles are annotated. We finally switched to the ACE corpus for the purpose of comparison with other work. In the ACE corpus, no syntactic roles are annotated. We did use the Stanford parser to extract syntactic roles from the ACE corpus. But the result is largely affected by the parsing accuracy. Again, for a fair comparison, we extract similar features to Denis and Baldridge (2007), which is the model we mainly compare with. They approximate syntactic contexts with POS tags surrounding the pronoun. Inspired by this idea, we successfully represent binding features with POS tags before anaphors. Instead of using P(post | loct, sloct), we train P(post | loct, posloct) which can play the role of binding. For example, suppose the buffer size is 6 and loct = 5, posloct = noun. Then, P(post = reflexive | loct, posloct) is usually higher than P(post = pronoun | loct, posloct), since the reflexive has a higher probability of referring back to the noun located in position 5 than the pronoun. In future work expanding to coreference resolution between any noun phrases we intend to integrate syntax into this framework as a joint model of coreference resolution and parsing. 3 Observation Model The observation model that generates an observed state is defined as Equation 5. To expand that equation in detail, the observation state, the word, depends on its part of speech and its coreference features as well. Since FHMMs are generative, we can say part of speech and coreference features generate the word. In actual implementation, the observed model will be very sparse, since crt will be split into more variables according to how many coreference features it is composed of. In order to avoid the sparsity, we transform the equation with Bayes’ law as follows. PΘO(ot | ht) = P(ot) · P(ht | ot) P o′ P(o′)P(ht | o′) (10) = P(ot) · P(post, crt | ot) P o′ P(o′)P(post, crt | o′) (11) We define pos and cr to be independent of each other, so we can further split the above equation as: PΘO(ot | ht) def = P(ot) · P(post | ot) · P(crt | ot) P o′ P(o′) · P(post | o′) · P(crt | o′) (12) where P(crt | ot) = P(gt | ot)P(nt | ot)P(et | ot) and P(crt | o′) = P(gt | o′)P(nt | o′)P(et | o′). This change transforms the FHMM to a hybrid FHMM since the observation model no longer generates the data. Instead, the observation model generates hidden states, which is more a combination of discriminative and generative approaches. This way facilitates building likelihood model files of features for given mentions from the training data. The 1174 hidden state transition model represents prior probabilities of coreference features associated with each while this observation model factors in the probability given a pronoun. 
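The observation score of Equation 12 can be computed from independently estimated per-word distributions, as in the following minimal sketch (ours; the dictionary layout of the trained model files is an assumption).

    def observation_prob(word, pos, gender, number, etype, vocab, models):
        # Equation 12: P(o_t | h_t) via Bayes' rule, with POS and the coreference
        # features treated as independent given the word.
        # models['prior'][w] = P(w); models['pos'][w][t] = P(t | w), and likewise
        # for 'gender', 'number' and 'etype', all estimated from training data.
        def joint(w):
            return (models["prior"].get(w, 0.0)
                    * models["pos"].get(w, {}).get(pos, 0.0)
                    * models["gender"].get(w, {}).get(gender, 0.0)
                    * models["number"].get(w, {}).get(number, 0.0)
                    * models["etype"].get(w, {}).get(etype, 0.0))
        denominator = sum(joint(w) for w in vocab)
        return joint(word) / denominator if denominator > 0 else 0.0

In practice the per-word tables need smoothing (the paper mentions add-one smoothing and deleted interpolation), and the normalization over the vocabulary can be precomputed rather than summed on every call.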
3.1 Unknown Words Processing If an observed word was not seen in training, the distribution of its part of speech, gender, number and entity type will be unknown. In this case, a special unknown words model is used. The part of speech of unknown words P(post | wt = unkword) is estimated using a decision tree model. This decision tree is built by splitting letters in words from the end of the word backward to its beginning. A POS tag is assigned to the word after comparisons between the morphological features of words trained from the corpus and the strings concatenated from the tree leaves are made. This method is about as accurate as the approach described by Klein and Manning (2003). Next, a similar model is set up for estimating P(nt | wt = unkword). Most English words have regular plural forms, and even irregular words have their patterns. Therefore, the morphological features of English words can often be used to determine whether a word is singular or plural. Gender is irregular in English, so model-based predictions are problematic. Instead, we follow Bergsma and Lin (2005) to get the distribution of gender from their gender/number data and then predict the gender for unknown words. 4 Evaluation and Discussion 4.1 Experimental Setup In this research, we used the ACE corpus (Phase 2) 1 for evaluation. The development of this corpus involved two stages. The first stage is called EDT (entity detection and tracking) while the second stage is called RDC (relation detection and characterization). All markables have named entity types such as FACILITY, GPE (geopolitical entity), PERSON, LOCATION, ORGANIZATION, PERSON, VEHICLE and WEAPONS, which were annotated in the first stage. In the second stage, relations between 1See http://projects.ldc.upenn.edu/ace/ annotation/previous/ for details on the corpus. named entities were annotated. This corpus include three parts, composed of different genres: newspaper texts (NPAPER), newswire texts (NWIRE) and broadcasted news (BNEWS). Each of these is split into a train part and a devtest part. For the train part, there are 76, 130 and 217 articles in NPAPER, NWIRE and BNEWS respectively while for the test part, there are 17, 29 and 51 articles respectively. Though the number of articles are quite different for three genres, the total number of words are almost the same. Namely, the length of NPAPER is much longer than BNEWS (about 1200 words, 800 word and 500 words respectively for three genres). The longer articles involve longer coreference chains. Following the common practice, we used the devtest material only for testing. Progress during the development phase was estimated only by using cross-validation on the training set for the BNEWS section. In order to make comparisons with publications which used the same corpus, we make efforts to set up identical conditions for our experiments. The main point of comparison is Denis and Baldridge (2007), which was similar in that it described a new type of coreference resolver using simple features. Therefore, similar to their practice, we use all forms of personal and possessive pronouns that were annotated as ACE ”markables”. Namely, pronouns associated with named entity types could be used in this system. In experiments, we also used true ACE mentions as they did. This means that pleonastics and references to eventualities or to non-ACE entities are not included in our experiments either. In all, 7263 referential pronouns in training data set and 1866 in testing data set are found in all three genres. 
They have results of three different systems: SCC (single candidate classifier), TCC (twin candidate classifier) and RK (ranking). Besides the three and our own system, we also report results of emPronouns, which is an unsupervised system based on a recently published paper (Charniak and Elsner, 2009). We select this unsupervised system for two reasons. Firstly, emPronouns is a publicly available system with high accuracy in pronoun resolution. Secondly, it is necessary for us to demonstrate our system has strong empirical superiority over unsupervised ones. In testing, we also used the OPNLP Named Entity Recognizer to tag the test corpus. 1175 During training, besides coreference annotation itself, the part of speech, dependencies between words and named entities, gender, number and index are extracted using relative frequency estimation to train models for the coreference resolution system. Inputs for testing are the plain text and the trained model files. The entity buffer used in these experiments kept track of only the six most recent mentions. The result of this process is an annotation of the headword of every noun phrase denoting it as a mention. In addition, this system does not do anaphoricity detection, so the antecedent operation for non-anaphora pronoun it is set to be none. Finally, the system does not yet model cataphora, about 10 cataphoric pronouns in the testing data which are all counted as wrong. 4.2 Results The performance was evaluated using the ratio of the number of correctly resolved anaphors over the number of all anaphors as a success metrics. All the standards are consistent with those defined in Charniak and Elsner (2009). During development, several preliminary experiments explored the effects of starting from a simple baseline and adding more features. The BNEWS corpus was employed in these development experiments. The baseline only includes part of speech tags, the index feature and and syntactic roles. Syntactic roles are extracted from the parsing results with Stanford parser. The success rate of this baseline configuration is 0.48. This low accuracy is partially due to the errors of automatic parsing. With gender and number features added, the performance jumped to 0.65. This shows that number and gender agreements play an important role in pronoun anaphora resolution. For a more standard comparison to other work, subsequent tests were performed on the gold standard ACE corpus (using the model as described with named entity features instead of syntactic role features). As shown in Denis and Baldridge (2007), they employ all features we use except syntactic roles. In these experiments, the system got better results as shown in Table 2. The result of the first one is obtained by running the publicly available system emPronouns2. It is a 2the available system in fact only includes the testing part. Thus, it may be unfair to compare emPronouns this way with System BNEWS NPAPER NWIRE emPronouns 58.5 64.5 60.6 SCC 62.2 70.7 68.3 TCC 68.6 74.7 71.1 RK 72.9 76.4 72.4 FHMM 74.9 79.4 74.5 Table 2: Accuracy scores for emPronouns, the singlecandidate classifier (SCC), the twin-candidate classifier (TCC), the ranker and FHMM high-accuracy unsupervised system which reported the best result in Charniak and Elsner (2009). The results of the other three systems are those reported by Denis and Baldridge (2007). As Table 2 shows, the FHMM system gets the highest average results. 
The emPronouns system got the lowest results partially due to the reason that we only directly run the existing system with its existing model files without retraining. But the gap between its results and results of our system is large. Thus, we may still say that our system probably can do a better job even if we train new models files for emPronouns with ACE corpus. With almost exactly identical settings, why does our FHMM system get the highest average results? The convincing reason is that FHMM is strongly influenced by the sequential dependencies. The ranking approach ranks a set of mentions using a set of features, and it also maintains the discourse model, but it is not processing sequentially. The FHMM system always maintain a set of mentions as well as a first-order dependencies between part of speech and operator. Therefore, context can be more fully taken into consideration. This is the main reason that the FHMM approach achieved better results than the ranking approach. From the result, one point we may notice is that NPAPER usually obtains higher results than both BNEWS and NWIRE for all systems while BNEWS lower than other two genres. In last section, we mention that articles in NPAPER are longer than other genres and also have denser coreference chains while articles in BENEWS are shorter and have sparer chains. Then, it is not hard to understand why results of NPAPER are better while those of other systems. 1176 BNEWS are poorer. In Denis and Baldridge (2007), they also reported new results with a window of 10 sentences for RK model. All three genres obtained higher results than those when with shorter ones. They are 73.0, 77.6 and 75.0 for BNEWS, NPAPER and NWIRE respectively. We can see that except the one for NWIRE, the results are still poorer than our system. For NWIRE, the RK model got 0.5 higher. The average of the RK is 75.2 while that of the FHMM system is 76.3, which is still the best. Since the emPronoun system can output samplelevel results, it is possible to do a paired Student’s t-test. That test shows that the improvement of our system on all three genres is statistically significant (p < 0.001). Unfortunately, the other systems only report overall results so the same comparison was not so straightforward. 4.3 Error Analysis After running the system on these documents, we checked which pronouns fail to catch their antecedents. There are a few general reasons for errors. First, pronouns which have antecedents very far away cannot be caught. Long-distance anaphora resolution may pose a problem since the buffer size cannot be too long considering the complexity of tracking a large number of mentions through time. During development, estimation of an acceptable size was attempted using the training data. It was found that a mention distance of fourteen would account for every case found in this corpus, though most cases fall well short of that distance. Future work will explore optimizations that will allow for larger or variable buffer sizes so that longer distance anaphora can be detected. A second source of error is simple misjudgments when more than one candidate is waiting for selection. A simple case is that the system fails to distinguish plural personal nouns and non-personal nouns if both candidates are plural. This is not a problem for singular pronouns since gender features can tell whether pronouns are personal or not. Plural nouns in English do not have such distinctions, however. 
Consequently, demands and Israelis have the same probability of being selected as the antecedents for they, all else being equal. If demands is closer to they, demands will be selected as the antecedent. This may lead to the wrong choice if they in fact refers to Israelis. This may require better measures of referent salience than the “least recently used” heuristic currently implemented. Third, these results also show difficulty resolving coordinate noun phrases due to the simplistic representation of noun phrases in the input. Consider this sentence: President Barack Obama and his wife Michelle Obama visited China last week. They had a meeting with President Hu in Beijing. In this example, the pronoun they corefers with the noun phrase President Barack Obama and his wife Michelle Obama. The present model cannot represent both the larger noun phrase and its contained noun phrases. Since the noun phrase is a coordinate one that includes both noun phrases, the model cannot find a head word to represent it. Finally, while the coreference feature annotations of the ACE are valuable for learning feature models, the model training may still give some misleading results. This is brought about by missing features in the training corpus and by the data sparsity. We solved the problem with add-one smoothing and deleted interpolation in training models besides the transformation in the generation order of the observation model. 5 Conclusion and Future Work This paper has presented a pronoun anaphora resolution system based on FHMMs. This generative system incrementally resolves pronoun anaphora with an entity buffer carrying forward mention features. The system performs well and outperforms other available models. This shows that FHMMs and other time-series models may be a valuable model to resolve anaphora. Acknowledgments We would like to thank the authors and maintainers of ranker models and emPronouns. We also would like to thank the three anonymous reviewers. The final version is revised based on their valuable comments. Thanks are extended to Shane Bergsma, who provided us the gender and number data distribution. In addition, Professor Jeanette Gundel and our labmate Stephen Wu also gave us support in paper editing and in theoretical discussion. 1177 References S Bergsma. 2005. Automatic acquisition of gender information for anaphora resolution. page 342353. Springer. Eugene Charniak and Micha Elsner. 2009. Em works for pronoun anaphora resolution. In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL-09), Athens, Greece. Noam Chomsky. 1981. Lectures on government and binding. Foris, Dordercht. H.H. Clark and CJ Sengul. 1979. In search of referents for nouns and pronouns. Memory & Cognition, 7(1):35–41. P. Denis and J. Baldridge. 2007. A ranking approach to pronoun resolution. In Proc. IJCAI. Kevin Duh. 2005. Jointly labeling multiple sequences: a factorial HMM approach. In ACL ’05: Proceedings of the ACL Student Research Workshop, pages 19–24, Ann Arbor, Michigan. Zoubin Ghahramani and Michael I. Jordan. 1997. Factorial hidden markov models. Machine Learning, 29:1– 31. A. Haghighi and D. Klein. 2007. Unsupervised coreference resolution in a nonparametric bayesian model. In Proceedings of the 45th annual meeting on Association for Computational Linguistics, page 848. A. Haghighi and D. Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3Volume 3, pages 1152–1161. Association for Computational Linguistics. A. Haghighi and D. Klein. 2010. Coreference resolution in a modular, entity-centered model. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 385–393. Association for Computational Linguistics. L. Hasler, C. Orasan, and K. Naumann. 2006. NPs for events: Experiments in coreference annotation. In Proceedings of the 5th edition of the International Conference on Language Resources and Evaluation (LREC2006), pages 1167–1172. Citeseer. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430, Sapporo, Japan. X Luo. 2005. On coreference resolution performance metrics. pages 25–32. Association for Computational Linguistics Morristown, NJ, USA. A. McCallum and B. Wellner. 2003. Toward conditional models of identity uncertainty with application to proper noun coreference. In IJCAI Workshop on Information Integration on the Web. Citeseer. David McClosky, Eugene Charniak, and Mark Johnson. 2008. BLLIP North American News Text, Complete. Linguistic Data Consortium. LDC2008T13. T.S. Morton. 2000. Coreference for NLP applications. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 173–180. Association for Computational Linguistics. V. Ng. 2008. Unsupervised models for coreference resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 640– 649. Association for Computational Linguistics. US NIST. 2003. The ACE 2003 Evaluation Plan. US National Institute for Standards and Technology (NIST), Gaithersburg, MD.[online, pages 2003–08. L. Qiu, M.Y. Kan, and T.S. Chua. 2004. A public reference implementation of the rap anaphora resolution algorithm. Arxiv preprint cs/0406031. X. Yang, J. Su, G. Zhou, and C.L. Tan. 2004. Improving pronoun resolution by incorporating coreferential information of candidates. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 127. Association for Computational Linguistics. 1178
2011
117
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1179–1189, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Disentangling Chat with Local Coherence Models Micha Elsner School of Informatics University of Edinburgh [email protected] Eugene Charniak Department of Computer Science Brown University, Providence, RI 02912 [email protected] Abstract We evaluate several popular models of local discourse coherence for domain and task generality by applying them to chat disentanglement. Using experiments on synthetic multiparty conversations, we show that most models transfer well from text to dialogue. Coherence models improve results overall when good parses and topic models are available, and on a constrained task for real chat data. 1 Introduction One property of a well-written document is coherence, the way each sentence ts into its context– sentences should be interpretable in light of what has come before, and in turn make it possible to interpret what comes after. Models of coherence have primarily been used for text-based generation tasks: ordering units of text for multidocument summarization or inserting new text into an existing article. In general, the corpora used consist of informative writing, and the tasks used for evaluation consider different ways of reordering the same set of textual units. But the theoretical concept of coherence goes beyond both this domain and this task setting– and so should coherence models. This paper evaluates a variety of local coherence models on the task of chat disentanglement or “threading”: separating a transcript of a multiparty interaction into independent conversations1. Such simultaneous conversations occur in internet chat 1A public implementation is available via https:// bitbucket.org/melsner/browncoherence. rooms, and on shared voice channels such as pushto-talk radio. In these situations, a single, correctly disentangled, conversational thread will be coherent, since the speakers involved understand the normal rules of discourse, but the transcript as a whole will not be. Thus, a good model of coherence should be able to disentangle sentences as well as order them. There are several differences between disentanglement and the newswire sentence-ordering tasks typically used to evaluate coherence models. Internet chat comes from a different domain, one where topics vary widely and no reliable syntactic annotations are available. The disentanglement task measures different capabilities of a model, since it compares documents that are not permuted versions of one another. Finally, full disentanglement requires a large-scale search, which is computationally difcult. We move toward disentanglement in stages, carrying out a series of experiments to measure the contribution of each of these factors. As an intermediary between newswire and internet chat, we adopt the SWITCHBOARD (SWBD) corpus. SWBD contains recorded telephone conversations with known topics and hand-annotated parse trees; this allows us to control for the performance of our parser and other informational resources. To compare the two algorithmic settings, we use SWBD for ordering experiments, and also articially entangle pairs of telephone dialogues to create synthetic transcripts which we can disentangle. Finally, we present results on actual internet chat corpora. 
On synthetic SWBD transcripts, local coherence models improve performance considerably over our baseline model, Elsner and Charniak (2008b). On 1179 internet chat, we continue to do better on a constrained disentanglement task, though so far, we are unable to apply these improvements to the full task. We suspect that, with better low-level annotation tools for the chat domain and a good way of integrating prior information, our improvements on SWBD could transfer fully to IRC chat. 2 Related work There is extensive previous work on coherence models for text ordering; we describe several specic models below, in section 2. This study focuses on models of local coherence, which relate text to its immediate context. There has also been work on global coherence, the structure of a document as a whole (Chen et al., 2009; Eisenstein and Barzilay, 2008; Barzilay and Lee, 2004), typically modeled in terms of sequential topics. We avoid using them here, because we do not believe topic sequences are predictable in conversation and because such models tend to be algorithmically cumbersome. In addition to text ordering, local coherence models have also been used to score the uency of texts written by humans or produced by machine (Pitler and Nenkova, 2008; Lapata, 2006; Miltsakaki and Kukich, 2004). Like disentanglement, these tasks provide an algorithmic setting that differs from ordering, and so can demonstrate previously unknown weaknesses in models. However, the target genre is still informative writing, so they reveal little about cross-domain exibility. The task of disentanglement or “threading” for internet chat was introduced by Shen et al. (2006). Elsner and Charniak (2008b) created the publicly available #LINUX corpus; the best published results on this corpus are those of Wang and Oard (2009). These two studies use overlapping unigrams to measure similarity between two sentences; Wang and Oard (2009) use a message expansion technique to incorporate context beyond a single sentence. Unigram overlaps are used to model coherence, but more sophisticated methods using syntax (Lapata and Barzilay, 2005) or lexical features (Lapata, 2003) often outperform them on ordering tasks. This study compares several of these methods with Elsner and Charniak (2008b), which we use as a baseline because there is a publicly available implementation2. Adams (2008) also created and released a disentanglement corpus. They use LDA (Blei et al., 2001) to discover latent topics in their corpus, then measuring similarity by looking for shared topics. These features fail to improve their performance, which is puzzling in light of the success of topic modeling for other coherence and segmentation problems (Eisenstein and Barzilay, 2008; Foltz et al., 1998). The results of this study suggest that topic models can help with disentanglement, but that it is difcult to nd useful topics for IRC chat. A few studies have attempted to disentangle conversational speech (Aoki et al., 2003; Aoki et al., 2006), mostly using temporal features. For the most part, however, this research has focused on auditory processing in the context of the cocktail party problem, the task of attending to a specic speaker in a noisy room (Haykin and Chen, 2005). Utterance content has some inuence on what the listener perceives, but only for extremely salient cues such as the listener's name (Moray, 1959), so cocktail party research does not typically use lexical models. 3 Models In this section, we briey describe the models we intend to evaluate. 
Most of them are drawn from previous work; one, the topical entity grid, is a novel extension of the entity grid. For the experiments below, we train the models on SWBD, sometimes augmented with a larger set of automatically parsed conversations from the FISHER corpus. Since the two corpora are quite similar, FISHER is a useful source for extra data; McClosky et al. (2010) uses it for this purpose in parsing experiments. (We continue to use SWBD/FISHER even for experiments on IRC, because we do not have enough disentangled training data to learn lexical relationships.)

3.1 Entity grid

The entity grid (Lapata and Barzilay, 2005; Barzilay and Lapata, 2005) is an attempt to model some principles of Centering Theory (Grosz et al., 1995) in a statistical manner. It represents a document in terms of entities and their syntactic roles: subject (S), object (O), other (X) and not present (-). In each new utterance, the grid predicts the role in which each entity will appear, given its history of roles in the previous sentences, plus a salience feature counting the total number of times the entity occurs. For instance, for an entity which is the subject of sentence 1, the object of sentence 2, and occurs four times in total, the grid predicts its role in sentence 3 according to the conditional P(· | S, O, sal = 4). As in previous work, we treat each noun in a document as denoting a single entity, rather than using a coreference technique to attempt to resolve them. In our development experiments, we noticed that coreferent nouns often occur farther apart in conversation than in newswire, since they are frequently referred to by pronouns and deictics in the interim. Therefore, we extend the history to six previous utterances. For robustness with this long history, we model the conditional probabilities using multilabel logistic regression rather than maximum likelihood. This requires the assumption of a linear model, but makes the estimator less vulnerable to overfitting due to sparsity, increasing performance by about 2% in development experiments.

3.2 Topical entity grid

This model is a variant of the generative entity grid, intended to take into account topical information. To create the topical entity grid, we learn a set of topic-to-word distributions for our corpus using LDA (Blei et al., 2001)3 with 200 latent topics. This model embeds our vocabulary in a low-dimensional space: we represent each word w as the vector of topic probabilities p(t_i | w). We experimented with several ways to measure relationships between words in this space, starting with the standard cosine. However, the cosine can depend on small variations in probability (for instance, if w has most of its mass in dimension 1, then it is sensitive to the exact weight of v for topic 1, even if this essentially never happens). To control for this tendency, we instead use the magnitude of the dimension of greatest similarity: sim(w, v) = max_i min(w_i, v_i). To model coherence, we generalize the binary history features of the standard entity grid, which detect, for example, whether entity e is the subject of the previous sentence. In the topical entity grid, we instead compute a real-valued feature which sums up the similarity between entity e and the subject(s) of the previous sentence. These features can detect a transition like: “The House voted yesterday. The Senate will consider the bill today.”.

2 cs.brown.edu/˜melsner
3 www.cs.princeton.edu/˜blei/topicmodeling.html
If “House” and “Senate” have a high similarity, then the feature will have a high value, predicting that “Senate” is a good subject for the current sentence. As in the previous section, we learn the conditional probabilities with logistic regression; we train in parallel by splitting the data and averaging (Mann et al., 2009). The topics are trained on FISHER, and on NANC for news. 3.3 IBM-1 The IBM translation model was rst considered for coherence by Soricut and Marcu (2006), although a less probabilistically elegant version was proposed earlier (Lapata, 2003). This model attempts to generate the content words of the next sentence by translating them from the words of the previous sentence, plus a null word; thus, it will learn alignments between pairs of words that tend to occur in adjacent sentences. We learn parameters on the FISHER corpus, and on NANC for news. 3.4 Pronouns The use of a generative pronoun resolver for coherence modeling originates in Elsner and Charniak (2008a). That paper used a supervised model (Ge et al., 1998), but we adapt a newer, unsupervised model which they also make publicly available (Charniak and Elsner, 2009)4. They model each pronoun as generated by an antecedent somewhere in the previous two sentences. If a good antecedent is found, the probability of the pronoun's occurrence will be high; otherwise, the probability is low, signaling that the text is less coherent because the pronoun is hard to interpret correctly. We use the model as distributed for news text. For conversation, we adapt it by running a few iterations of their EM training algorithm on the FISHER data. 4bllip.cs.brown.edu/resources.shtml\ #software 1181 3.5 Discourse-newness Building on work from summarization (Nenkova and McKeown, 2003) and coreference resolution (Poesio et al., 2005), Elsner and Charniak (2008a) use a model which recognizes discourse-new versus old NPs as a coherence model. For instance, the model can learn that “President Barack Obama” is a more likely rst reference than “Obama”. Following their work, we score discourse-newness with a maximum-entropy classier using syntactic features counting different types of NP modiers, and we use NP head identity as a proxy for coreference. 3.6 Chat-specic features Most disentanglement models use non-linguistic information alongside lexical features; in fact, timestamps and speaker identities are usually better cues than words are. We capture three essential nonlinguistic features using simple generative models. The rst feature is the time gap between one utterance and the next within the same thread. Consistent short gaps are a sign of normal turn-taking behavior; long pauses do occur, but much more rarely (Aoki et al., 2003). We round all time gaps to the nearest second and model the distribution of time gaps using a histogram, choosing bucket sizes adaptively so that each bucket contains at least four datapoints. The second feature is speaker identity; conversations usually involve a small subset of the total number of speakers, and a few core speakers make most of the utterances. We model the distribution of speakers in each conversation using a Chinese Restaurant Process (CRP) (Aldous, 1985) (tuning the dispersion to maximize development peformance). The CRP's “rich-get-richer” dynamics capture our intuitions, favoring conversations dominated by a few vociferous speakers. Finally, we model name mentioning. 
Speakers in IRC chat often use their addressee's names to coordinate the chat (O'Neill and Martin, 2003), and this is a powerful source of information (Elsner and Charniak, 2008b). Our model classifies each utterance into either the start or continuation of a conversational turn, by checking if the previous utterance had the same speaker. Given this status, it computes probabilities for three outcomes: no name mention, a mention of someone who has previously spoken in the conversation, or a mention of someone else. (The third option is extremely rare; this accounts for most of the model's predictive power). We learn these probabilities from IRC training data.

3.7 Model combination

To combine these different models, we adopt the log-linear framework of Soricut and Marcu (2006). Here, each model P_i is assigned a weight λ_i, and the combined score P(d) is proportional to:

∑_i λ_i log(P_i(d))

The weights λ can be learned discriminatively, maximizing the probability of d relative to a task-specific contrast set. For ordering experiments, the contrast set is a single random permutation of d; we explain the training regime for disentanglement below, in subsection 4.1.

4 Comparing orderings of SWBD

To measure the differences in performance caused by moving from news to a conversational domain, we first compare our models on an ordering task, discrimination (Barzilay and Lapata, 2005; Karamanis et al., 2004). In this task, we take an original document and randomly permute its sentences, creating an artificial incoherent document. We then test to see if our model prefers the coherent original. For SWBD, rather than compare permutations of the individual utterances, we permute conversational turns (sets of consecutive utterances by each speaker), since turns are natural discourse units in conversation. We take documents numbered 2000–3999 as training/development and the remainder as test, yielding 505 training and 153 test documents; we evaluate 20 permutations per document. As a comparison, we also show results for the same models on WSJ, using the train-test split from Elsner and Charniak (2008a); the test set is sections 14-24, totalling 1004 documents. Purandare and Litman (2008) carry out similar experiments on distinguishing permuted SWBD documents, using lexical and WordNet features in a model similar to Lapata (2003). Their accuracy for this task (which they call “switch-hard”) is roughly 68%.

                 WSJ     SWBD
EGrid            76.4‡   86.0
Topical EGrid    71.8‡   70.9‡
IBM-1            77.2‡   84.9†
Pronouns         69.6‡   71.7‡
Disc-new         72.3‡   55.0‡
Combined         81.9    88.4
-EGrid           81.0    87.5
-Topical EGrid   82.2    90.5
-IBM-1           79.0‡   88.9
-Pronouns        81.3    88.5
-Disc-new        82.2    88.4

Table 1: Discrimination F scores on news and dialogue. ‡ indicates a significant difference from the combined model at p=.01 and † at p=.05.

In Table 1, we show the results for individual models, for the combined model, and ablation results for mixtures without each component. WSJ is more difficult than SWBD overall because, on average, news articles are shorter than SWBD conversations. Short documents are harder, because permuting disrupts them less. The best SWBD result is 91%; the best WSJ result is 82% (both for mixtures without the topical entity grid). The WSJ result is state-of-the-art for the dataset, improving slightly on Elsner and Charniak (2008a) at 81%. We test results for significance using the non-parametric Mann-Whitney U test. Controlling for the fact that discrimination is easier on SWBD, most of the individual models perform similarly in both corpora.
The strongest models in both cases are the entity grid and IBM-1 (at about 77% for news, 85% for dialogue). Pronouns and the topical entity grid are weaker. The major outlier is the discourse-new model, whose performance drops from 72% for news to only 55%, just above chance, for conversation. The model combination results show that all the models are quite closely correlated, since leaving out any single model does not degrade the combination very much (only one of the ablations is significantly worse than the combination). The most critical in news is IBM-1 (decreasing performance by 3% when removed); in conversation, it is the entity grid (decreasing by about 1%). The topical entity grid actually has a (nonsignificant) negative impact on combined performance, implying that its predictive power in this setting comes mainly from information that other models also capture, but that it is noisier and less reliable. In each domain, the combined models outperform the best single model, showing the information provided by the weaker models is not completely redundant. Overall, these results suggest that most previously proposed local coherence models are domain-general; they work on conversation as well as news. The exception is the discourse-newness model, which benefits most from the specific conventions of a written style. Full names with titles (like “President Barack Obama”) are more common in news, while conversation tends to involve fewer completely unfamiliar entities and more cases of bridging reference, in which grounding information is given implicitly (Nissim, 2006). Due to its poor performance, we omit the discourse-newness model in our remaining experiments.

5 Disentangling SWBD

We now turn to the task of disentanglement, testing whether models that are good at ordering also do well in this new setting. We would like to hold the domain constant, but we do not have any disentanglement data recorded from naturally occurring speech, so we create synthetic instances by merging pairs of SWBD dialogues. Doing so creates an artificial transcript in which two pairs of people appear to be talking simultaneously over a shared channel. The situation is somewhat contrived in that each pair of speakers converses only with each other, never breaking into the other pair's dialogue and rarely using devices like name mentioning to make it clear who they are addressing. Since this makes speaker identity a perfect cue for disentanglement, we do not use it in this section. The only chat-specific model we use is time. Because we are not using speaker information, we remove all utterances which do not contain a noun before constructing synthetic transcripts– these are mostly backchannels like “Yeah”. Such utterances cannot be correctly assigned by our coherence models, which deal with content; we suspect most of them could be dealt with by associating them with the nearest utterance from the same speaker.

Once the backchannels are stripped, we can create a synthetic transcript. For each dialogue, we first simulate timestamps by sampling the number of seconds between each utterance and the next from a discretized Gaussian: ⌊N(0, 2.5)⌋. The interleaving of the conversations is dictated by the timestamps. We truncate the longer conversation at the length of the shorter; this ensures a baseline score of 50% for the degenerate model that assigns all utterances to the same conversation.
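As a concrete reference point, the following is a minimal sketch of this entangling step. Only the discretized-Gaussian gaps, the timestamp-driven interleaving, and the truncation come from the description above; the representation of utterances as plain strings, the clipping of negative gaps to zero, and the seeding are assumptions made for illustration.

```python
import math
import random

def simulate_timestamps(n_utts, mean=0.0, std=2.5, seed=None):
    """Assign a send time (in whole seconds) to each utterance by accumulating
    gaps drawn from a discretized Gaussian, floor(N(mean, std)). Negative
    draws are clipped to zero so time never runs backwards -- an assumption
    of this sketch, not a detail taken from the paper."""
    rng = random.Random(seed)
    t, times = 0, []
    for _ in range(n_utts):
        t += max(0, math.floor(rng.gauss(mean, std)))
        times.append(t)
    return times

def entangle(dialogue_a, dialogue_b, seed=0):
    """Interleave two dialogues by simulated timestamp, truncating the longer
    one to the length of the shorter so that a degenerate single-thread
    answer can score at most 50%."""
    n = min(len(dialogue_a), len(dialogue_b))
    rows = [(t, utt, 0) for t, utt in
            zip(simulate_timestamps(n, seed=seed), dialogue_a[:n])]
    rows += [(t, utt, 1) for t, utt in
             zip(simulate_timestamps(n, seed=seed + 1), dialogue_b[:n])]
    rows.sort(key=lambda r: r[0])   # the timestamps dictate the interleaving
    transcript = [utt for _, utt, _ in rows]
    gold_threads = [k for _, _, k in rows]
    return transcript, gold_threads

# Toy usage: two short "dialogues" become one transcript plus gold labels.
merged, gold = entangle(["A1", "A2", "A3"], ["B1", "B2", "B3", "B4"], seed=42)
```

The gold thread labels are carried along so that disentanglement output can later be scored against them.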
We create synthetic instances of two types– those where the two entangled conversations had different topical prompts and those where they were the same. (Each dialogue in SWBD focuses on a preselected topic, such as fishing or movies.) We entangle dialogues from our ordering development set to use for mixture training and validation; for testing, we use 100 instances of each type, constructed from dialogues in our test set. When disentangling, we treat each thread as independent of the others. In other words, the probability of the entire transcript is the product of the probabilities of the component threads. Our objective is to find the set of threads maximizing this. As a comparison, we use the model of Elsner and Charniak (2008b) as a baseline. To make their implementation comparable to ours, in this section we constrain it to find only two threads.

5.1 Disentangling a single utterance

Our first disentanglement task is to correctly assign a single utterance, given the true structure of the rest of the transcript. For each utterance, we compare two versions of the transcript, the original, and a version where it is swapped into the other thread. Our accuracy measures how often our models prefer the original. Unlike full-scale disentanglement, this task does not require a computationally demanding search, so it is possible to run experiments quickly. We also use it to train our mixture models for disentanglement, by constructing a training example for each utterance i in our training transcripts. Since the Elsner and Charniak (2008b) model maximizes a correlation clustering objective which sums up independent edge weights, we can also use it to disentangle a single sentence efficiently. Our results are shown in Table 2. Again, results for individual models are above the line, then our combined model, and finally ablation results for mixtures omitting a single model.

                 Different  Same   Avg.
EGrid            80.2       72.9   76.6
Topical EGrid    81.7       73.3   77.5
IBM-1            70.4       66.7   68.5
Pronouns         53.1       50.1   51.6
Time             58.5       57.4   57.9
Combined         86.8       79.6   83.2
-EGrid           86.0       79.1   82.6
-Topical EGrid   85.2       78.7   81.9
-IBM-1           86.2       78.7   82.4
-Pronouns        86.8       79.4   83.1
-Time            84.5       76.7   80.6
E+C `08          78.2       73.5   75.8

Table 2: Average accuracy for disentanglement of a single utterance on 200 synthetic multiparty conversations from SWBD test.

The results show that, for a pair of dialogues that differ in topic, our best model can assign a single sentence with 87% accuracy. For the same topic, the accuracy is 80%. In each case, these results improve on (Elsner and Charniak, 2008b), which scores 78% and 74%. Changing to this new task has a substantial impact on performance. The topical model, which performed poorly for ordering, is actually stronger than the entity grid in this setting. IBM-1 underperforms either grid model (69% to 77%); on ordering, it was nearly as good (85% to 86%). Despite their ordering performance of 72%, pronouns are essentially useless for this task, at 52%. This decline is due partly to domain, and partly to task setting. Although SWBD contains more pronominals than WSJ, many of them are first and second-person pronouns or deictics, which our model does not attempt to resolve. Since the disentanglement task involves moving only a single sentence, if moving this sentence does not sever a resolvable pronoun from its antecedent, the model will be unable to make a good decision.
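To make the single-utterance task concrete, here is a minimal sketch of the swap test described at the start of this subsection. The score_fn argument stands in for any single coherence model or for the trained combination; its interface, and the 0/1 thread labels, are assumptions of the sketch rather than the authors' implementation.

```python
def thread_views(transcript, labels):
    """Split a transcript (a list of utterances) into its two threads,
    according to a parallel list of 0/1 thread labels."""
    return [[u for u, l in zip(transcript, labels) if l == k] for k in (0, 1)]

def single_utterance_accuracy(transcript, gold_labels, score_fn):
    """For each utterance, compare the gold transcript against a version in
    which only that utterance is swapped into the other thread, and count
    how often the scorer prefers the original. score_fn maps a list of
    threads to a coherence score (higher = more coherent)."""
    gold_score = score_fn(thread_views(transcript, gold_labels))
    correct = 0
    for i in range(len(transcript)):
        swapped = list(gold_labels)
        swapped[i] = 1 - swapped[i]   # move utterance i to the other thread
        if gold_score > score_fn(thread_views(transcript, swapped)):
            correct += 1
    return correct / len(transcript)
```

Each (original, swapped) pair is also the kind of contrast example on which the mixture weights are trained.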
As before, the ablation results show that all the models are quite correlated, since removing any single model from the mixture causes only a small decrease in performance. The largest drop (83% to 81%) is caused by removing time; though time is a weak model on its own, it is completely orthogonal to the other models, since unlike them, it does not depend on the words in the sentences. Comparing results between “different topic” and “same topic” instances shows that “same topic” is harder– by about 7% for the combined model. The IBM model has a relatively small gap of 3.7%, and in the ablation results, removing it causes a larger drop in performance for “same” than “different”; this suggests it is somewhat more robust to similarity in topic than entity grids. Disentanglement accuracy is hard to predict given ordering performance; the two tasks plainly make different demands on models. One difference is that the models which use longer histories (the two entity grids) remain strong, while the models considering only one or two previous sentences (IBM and pronouns) do not do as well. Since the changes being considered here affect only a single sentence, while permutation affects the entire transcript, more history may help by making the model more sensitive to small changes.

5.2 Disentangling an entire transcript

We now turn to the task of disentangling an entire transcript at once. This is a practical task, motivated by applications such as search and information retrieval. However, it is more difficult than assigning only a single utterance, because decisions are interrelated– an error on one utterance may cause a cascade of poor decisions further down. It is also computationally harder. We use tabu search (Glover and Laguna, 1997) to find a good solution. The search repeatedly finds and moves the utterance which would most improve the model score if swapped from one thread to the other. Unlike greedy search, tabu search is constrained not to repeat a solution that it has recently visited; this forces it to keep exploring when it reaches a local maximum. We run 500 iterations of tabu search (usually finding the first local maximum after about 100) and return the best solution found. We measure performance with one-to-one overlap, which maps the two clusters to the two gold dialogues, then measures percent correct.5

5 The other popular metrics, F and loc3, are correlated.

                 Different  Same   Avg.
EGrid            60.3       57.1   58.7
Topical EGrid    62.3       56.8   59.6
IBM-1            56.5       55.2   55.9
Pronouns         54.5       54.4   54.4
Time             55.4       53.8   54.6
Combined         67.9       59.8   63.9
E+C `08          59.1       57.4   58.3

Table 3: One-to-one overlap between disentanglement results and truth on 200 synthetic multiparty conversations from SWBD test.

Our results (Table 3) show that, for transcripts with different topics, our disentanglement has 68% overlap with truth, extracting about two thirds of the structure correctly; this is substantially better than Elsner and Charniak (2008b), which scores 59%. Where the entangled conversations have the same topic, performance is lower, about 60%, but still better than the comparison model with 57%. Since correlations with the previous section are fairly reliable, and the disentanglement procedure is computationally intensive, we omit ablation experiments. As we expect, full disentanglement is more difficult than single-sentence disentanglement (combined scores drop by about 20%), but the single-sentence task is a good predictor of relative performance.
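As a reference for the search and scoring just described, the sketch below pairs the log-linear combination of Section 3.7 with a plain two-thread tabu search and the two-thread one-to-one measure. The (weight, probability-function) interface, the bounded tabu memory, and the alternating initial assignment are illustrative assumptions rather than details taken from the paper.

```python
import math

def combined_log_score(threads, models):
    """Log-linear combination, sum_i lambda_i * log P_i, summed over the
    independent threads. `models` is a list of (weight, prob_fn) pairs, where
    prob_fn maps a thread (list of utterances) to a positive probability."""
    return sum(w * math.log(prob_fn(thread))
               for thread in threads
               for w, prob_fn in models)

def tabu_disentangle(transcript, models, n_iters=500, tabu_size=50):
    """Two-thread disentanglement: repeatedly flip the single utterance whose
    move most improves the combined score, refusing recently visited
    assignments so the search keeps exploring past local maxima."""
    assign = [i % 2 for i in range(len(transcript))]      # arbitrary start
    def threads(a):
        return [[u for u, t in zip(transcript, a) if t == k] for k in (0, 1)]
    best = list(assign)
    best_score = combined_log_score(threads(assign), models)
    tabu = [tuple(assign)]
    for _ in range(n_iters):
        candidates = []
        for i in range(len(transcript)):
            moved = list(assign)
            moved[i] = 1 - moved[i]
            if tuple(moved) not in tabu:
                candidates.append(
                    (combined_log_score(threads(moved), models), moved))
        if not candidates:
            break
        score, assign = max(candidates, key=lambda c: c[0])
        tabu = (tabu + [tuple(assign)])[-tabu_size:]       # bounded memory
        if score > best_score:
            best, best_score = list(assign), score
    return best

def one_to_one(pred_labels, gold_labels):
    """One-to-one overlap for the two-thread case: the better of the two ways
    of mapping predicted threads onto gold threads, as a fraction correct."""
    agree = sum(p == g for p, g in zip(pred_labels, gold_labels))
    return max(agree, len(gold_labels) - agree) / len(gold_labels)
```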
Entity grid models do best, the IBM model remains useful, but less so than for discrimination, and pronouns are very weak. The IBM model performs similarly under both metrics (56% and 57%), while other models perform worse on loc 3. This supports our suggestion that IBM's decline in performance from ordering is indeed due to its using a single sentence history; it is still capable of getting local structures right, but misses global ones. 6 IRC data In this section, we move from synthetic data to real multiparty discourse recorded from internet chat rooms. We use two datasets: the #LINUX corpus (Elsner and Charniak, 2008b), and three larger corpora, #IPHONE, #PHYSICS and #PYTHON (Adams, 2008). We use the 1000-line “development” section of #LINUX for tuning our mixture models and the 800-line “test” section for development experiments. We reserve the Adams (2008) corpora for testing; together, they consist of 19581 lines of chat, with each section containing 500 to 1000 lines. 1185 Chat-specic 74.0 +EGrid 79.3 +Topical EGrid 76.8 +IBM-1 76.3 +Pronouns 73.9 +EGrid/Topic/IBM-1 78.3 E+C `08b 76.4 Table 4: Accuracy for single utterance disentanglement, averaged over annotations of 800 lines of #LINUX data. In order to use syntactic models like the entity grid, we parse the transcripts using (McClosky et al., 2006). Performance is bad, although the parser does identify most of the NPs; poor results are typical for a standard parser on chat (Foster, 2010). We postprocess the parse trees to retag “lol”, “haha” and “yes” as UH (rather than NN, NNP and JJ). In this section, we use all three of our chatspecic models (sec. 2.0.6; time, speaker and mention) as a baseline. This baseline is relatively strong, so we evaluate our other models in combination with it. 6.1 Disentangling a single sentence As before, we show results on correctly disentangling a single sentence, given the correct structure of the rest of the transcript. We average performance on each transcript over the different annotations, then average the transcripts, weighing them by length to give each utterance equal weight. Table 4 gives results on our development corpus, #LINUX. Our best result, for the chat-specic features plus entity grid, is 79%, improving on the comparison model, Elsner and Charniak (2008b), which gets 76%. (Although the table only presents an average over all annotations of the dataset, this model is also more accurate for each individual annotator than the comparison model.) We then ran the same model, chat-specic features plus entity grid, on the test corpora from Adams (2008). These results (Table 5) are also better than Elsner and Charniak (2008b), at an average of 93% over 89%. As pointed out in Elsner and Charniak (2008b), the chat-specic features are quite powerful in this domain, and it is hard to improve over them. Elsner and Charniak (2008b), which has simple lexical features, mostly based on unigram overlap, increases #IPHONE #PHYSICS #PYTHON +EGrid 92.3 96.6 91.1 E+C `08b 89.0 90.2 88.4 Table 5: Average accuracy for disentanglement of a single utterance for 19581 total lines from Adams (2008). performance over baseline by 2%. Both IBM and the topical entity grid achieve similar gains. The entity grid does better, increasing performance to 79%. Pronouns, as before for SWBD, are useless. We believe that the entity grid's good performance here is due mostly to two factors: its use of a long history, and its lack of lexicalization. 
The grid looks at the previous six sentences, which differentiates it from the IBM model and from Elsner and Charniak (2008b), which treats each pair of sentences independently. Using this long history helps to distinguish important nouns from unimportant ones better than frequency alone. We suspect that our lexicalized models, IBM and the topical entity grid, are hampered by poor parameter settings, since their parameters were learned on FISHER rather than IRC chat. In particular, we believe this explains why the topical entity grid, which slightly outperformed the entity grid on SWBD, is much worse here. 6.2 Full disentanglement Running our tabu search algorithm on the full disentanglement task yields disappointing results. Accuracies on the #LINUX dataset are not only worse than previous work, but also worse than simple baselines like creating one thread for each speaker. The model nds far too many threads– it detects over 300, when the true number is about 81 (averaging over annotations). This appears to be related to biases in our chat-specic models as well as in the entity grid; the time model (which generates gaps between adjacent sentences) and the speaker model (which uses a CRP) both assign probability 1 to single-utterance conversations. The entity grid also has a bias toward short conversations, because unseen entities are empirically more likely to occur toward the beginning of a conversation than in the middle. A major weakness in our model is that we aim only to maximize coherence of the individual conversations, with no prior on the likely length or number of conversations that will appear in the tran1186 script. This allows the model to create far too many conversations. Integrating a prior into our framework is not straightforward because we currently train our mixture to maximize single-utterance disentanglement performance, and the prior is not useful for this task. We experimented with xing parts of the transcript to the solution obtained by Elsner and Charniak (2008b), then using tabu search to ll in the gaps. This constrains the number of conversations and their approximate positions. With this structure in place, we were able to obtain scores comparable to Elsner and Charniak (2008b), but not improvements. It appears that our performance increase on single-sentence disentanglement does not transfer to this task because of cascading errors and the necessity of using external constraints. 7 Conclusions We demonstrate that several popular models of local coherence transfer well to the conversational domain, suggesting that they do indeed capture coherence in general rather than specic conventions of newswire text. However, their performance across tasks is not as stable; in particular, models which use less history information are worse for disentanglement. Our results study suggest that while sophisticated coherence models can potentially contribute to disentanglement, they would benet greatly from improved low-level resources for internet chat. Better parsing, or at least NP chunking, would help for models like the entity grid which rely on syntactic role information. Larger training sets, or some kind of transfer learning, could improve the learning of topics and other lexical parameters. In particular, our results on SWBD data conrm the conjecture of (Adams, 2008) that LDA topic modeling is in principle a useful tool for disentanglement– we believe a topic-based model could also work on IRC chat, but would require a better set of extracted topics. 
With better parameters for these models and the integration of a prior, we believe that our good performance on SWBD and single-utterance disentanglement for IRC can be extended to full-scale disentanglement of IRC. Acknowledgements We are extremely grateful to Regina Barzilay, Mark Johnson, Rebecca Mason, Ben Swanson and Neal Fox for their comments, to Craig Martell for the NPS chat datasets and to three anonymous reviewers. This work was funded by a Google Fellowship for Natural Language Processing. References Paige H. Adams. 2008. Conversation Thread Extraction and Topic Detection in Text-based Chat. Ph.D. thesis, Naval Postgraduate School. David Aldous. 1985. Exchangeability and related topics. In Ecole d'Ete de Probabilities de Saint-Flour XIII 1983, pages 1–198. Springer. Paul M. Aoki, Matthew Romaine, Margaret H. Szymanski, James D. Thornton, Daniel Wilson, and Allison Woodruff. 2003. The mad hatter's cocktail party: a social mobile audio space supporting multiple simultaneous conversations. In CHI '03: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 425–432, New York, NY, USA. ACM Press. Paul M. Aoki, Margaret H. Szymanski, Luke D. Plurkowski, James D. Thornton, Allison Woodruff, and Weilie Yi. 2006. Where's the “party” in “multiparty”?: analyzing the structure of small-group sociable talk. In CSCW '06: Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work, pages 393–402, New York, NY, USA. ACM Press. Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: an entity-based approach. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05). Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In HLT-NAACL 2004: Proceedings of the Main Conference, pages 113–120. David Blei, Andrew Y. Ng, and Michael I. Jordan. 2001. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:2003. Eugene Charniak and Micha Elsner. 2009. EM works for pronoun anaphora resolution. In Proceedings of EACL, Athens, Greece. Harr Chen, S.R.K. Branavan, Regina Barzilay, and David R. Karger. 2009. Global models of document structure using latent permutations. In Proceedings of Human Language Technologies: The 2009 Annual 1187 Conference of the North American Chapter of the Association for Computational Linguistics, pages 371– 379, Boulder, Colorado, June. Association for Computational Linguistics. Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In EMNLP, pages 334–343. Micha Elsner and Eugene Charniak. 2008a. Coreference-inspired coherence modeling. In Proceedings of ACL-08: HLT, Short Papers, pages 41–44, Columbus, Ohio, June. Association for Computational Linguistics. Micha Elsner and Eugene Charniak. 2008b. You talking to me? a corpus and algorithm for conversation disentanglement. In Proceedings of ACL-08: HLT, pages 834–842, Columbus, Ohio, June. Association for Computational Linguistics. Peter Foltz, Walter Kintsch, and Thomas Landauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse Processes, 25(2&3):285–307. Jennifer Foster. 2010. “cba to check the spelling”: Investigating parser performance on discussion forum posts. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 381–384, Los Angeles, California, June. 
Association for Computational Linguistics. Niyu Ge, John Hale, and Eugene Charniak. 1998. A statistical approach to anaphora resolution. In Proceedings of the Sixth Workshop on Very Large Corpora, pages 161–171, Orlando, Florida. Harcourt Brace. Fred Glover and Manuel Laguna. 1997. Tabu Search. University of Colorado at Boulder. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203–225. Simon Haykin and Zhe Chen. 2005. The Cocktail Party Problem. Neural Computation, 17(9):1875–1902. Nikiforos Karamanis, Massimo Poesio, Chris Mellish, and Jon Oberlander. 2004. Evaluating centeringbased metrics of coherence. In ACL, pages 391–398. Mirella Lapata and Regina Barzilay. 2005. Automatic evaluation of text coherence: Models and representations. In IJCAI, pages 1085–1090. Mirella Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proceedings of the annual meeting of ACL, 2003. Mirella Lapata. 2006. Automatic evaluation of information ordering: Kendall's tau. Computational Linguistics, 32(4):1–14. Gideon Mann, Ryan McDonald, Mehryar Mohri, Nathan Silberman, and Dan Walker. 2009. Efcient largescale distributed training of conditional maximum entropy models. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1231–1239. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 152–159. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 28–36, Los Angeles, California, June. Association for Computational Linguistics. Eleni Miltsakaki and K. Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Nat. Lang. Eng., 10(1):25–55. Neville Moray. 1959. Attention in dichotic listening: Affective cues and the inuence of instructions. Quarterly Journal of Experimental Psychology, 11(1):56– 60. Ani Nenkova and Kathleen McKeown. 2003. References to named entities: a corpus study. In NAACL '03, pages 70–72. Malvina Nissim. 2006. Learning information status of discourse entities. In Proceedings of EMNLP, pages 94–102, Morristown, NJ, USA. Association for Computational Linguistics. Jacki O'Neill and David Martin. 2003. Text chat in action. In GROUP '03: Proceedings of the 2003 international ACM SIGGROUP conference on Supporting group work, pages 40–49, New York, NY, USA. ACM Press. Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unied framework for predicting text quality. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 186–195, Honolulu, Hawaii, October. Association for Computational Linguistics. Massimo Poesio, Mijail Alexandrov-Kabadjov, Renata Vieira, Rodrigo Goulart, and Olga Uryupina. 2005. Does discourse-new detection help denite description resolution? In Proceedings of the Sixth International Workshop on Computational Semantics, Tillburg. Amruta Purandare and Diane J. Litman. 2008. Analyzing dialog coherence using transition patterns in lexical and semantic features. In FLAIRS Conference'08, pages 195–200. Dou Shen, Qiang Yang, Jian-Tao Sun, and Zheng Chen. 2006. 
Thread detection in dynamic text message 1188 streams. In SIGIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 35–42, New York, NY, USA. ACM. Radu Soricut and Daniel Marcu. 2006. Discourse generation using utility-trained coherence models. In Proceedings of the Association for Computational Linguistics Conference (ACL-2006). Lidan Wang and Douglas W. Oard. 2009. Context-based message expansion for disentanglement of interleaved text conversations. In Proceedings of NAACL-09. 1189
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1190–1199, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics An Affect-Enriched Dialogue Act Classification Model for Task-Oriented Dialogue Kristy Elizabeth Boyer Joseph F. Grafsgaard Eun Young Ha Robert Phillips* James C. Lester Department of Computer Science North Carolina State University Raleigh, NC, USA * Dual Affiliation with Applied Research Associates, Inc. Raleigh, NC, USA {keboyer, jfgrafsg, eha, rphilli, lester}@ncsu.edu Abstract Dialogue act classification is a central challenge for dialogue systems. Although the importance of emotion in human dialogue is widely recognized, most dialogue act classification models make limited or no use of affective channels in dialogue act classification. This paper presents a novel affect-enriched dialogue act classifier for task-oriented dialogue that models facial expressions of users, in particular, facial expressions related to confusion. The findings indicate that the affectenriched classifiers perform significantly better for distinguishing user requests for feedback and grounding dialogue acts within textual dialogue. The results point to ways in which dialogue systems can effectively leverage affective channels to improve dialogue act classification. 1 Introduction Dialogue systems aim to engage users in rich, adaptive natural language conversation. For these systems, understanding the role of a user’s utterance in the broader context of the dialogue is a key challenge (Sridhar, Bangalore, & Narayanan, 2009). Central to this endeavor is dialogue act classification, which categorizes the intention behind the user’s move (e.g., asking a question, providing declarative information). Automatic dialogue act classification has been the focus of a large body of research, and a variety of approaches, including sequential models (Stolcke et al., 2000), vector-based models (Sridhar, Bangalore, & Narayanan, 2009), and most recently, featureenhanced latent semantic analysis (Di Eugenio, Xie, & Serafin, 2010), have shown promise. These models may be further improved by leveraging regularities of the dialogue from both linguistic and extra-linguistic sources. Users’ expressions of emotion are one such source. Human interaction has long been understood to include rich phenomena consisting of verbal and nonverbal cues, with facial expressions playing a vital role (Knapp & Hall, 2006; McNeill, 1992; Mehrabian, 2007; Russell, Bachorowski, & Fernandez-Dols, 2003; Schmidt & Cohn, 2001). While the importance of emotional expressions in dialogue is widely recognized, the majority of dialogue act classification projects have focused either peripherally (or not at all) on emotion, such as by leveraging acoustic and prosodic features of spoken utterances to aid in online dialogue act classification (Sridhar, Bangalore, & Narayanan, 2009). Other research on emotion in dialogue has involved detecting affect and adapting to it within a dialogue system (Forbes-Riley, Rotaru, Litman, & Tetreault, 2009; López-Cózar, Silovsky, & Griol, 2010), but this work has not explored leveraging affect information for automatic user dialogue act classification. Outside of dialogue, sentiment analysis within discourse is an active area of research (López-Cózar et al., 2010), but it is generally lim1190 ited to modeling textual features and not multimodal expressions of emotion such as facial actions. 
Such multimodal expressions have only just begun to be explored within corpus-based dialogue research (Calvo & D'Mello, 2010; Cavicchio, 2009). This paper presents a novel affect-enriched dialogue act classification approach that leverages knowledge of users’ facial expressions during computer-mediated textual human-human dialogue. Intuitively, the user’s affective state is a promising source of information that may help to distinguish between particular dialogue acts (e.g., a confused user may be more likely to ask a question). We focus specifically on occurrences of students’ confusion-related facial actions during taskoriented tutorial dialogue. Confusion was selected as the focus of this work for several reasons. First, confusion is known to be prevalent within tutoring, and its implications for student learning are thought to run deep (Graesser, Lu, Olde, Cooper-Pye, & Whitten, 2005). Second, while identifying the “ground truth” of emotion based on any external display by a user presents challenges, prior research has demonstrated a correlation between particular facial action units and confusion during learning (Craig, D'Mello, Witherspoon, Sullins, & Graesser, 2004; D'Mello, Craig, Sullins, & Graesser, 2006; McDaniel et al., 2007). Finally, automatic facial action recognition technologies are developing rapidly, and confusion-related facial action events are among those that can be reliably recognized automatically (Bartlett et al., 2006; Cohn, Reed, Ambadar, Xiao, & Moriyama, 2004; Pantic & Bartlett, 2007; Zeng, Pantic, Roisman, & Huang, 2009). This promising development bodes well for the feasibility of automatic real-time confusion detection within dialogue systems. 2 Background and Related Work 2.1 Dialogue Act Classification Because of the importance of dialogue act classification within dialogue systems, it has been an active area of research for some time. Early work on automatic dialogue act classification modeled discourse structure with hidden Markov models, experimenting with lexical and prosodic features, and applying the dialogue act model as a constraint to aid in automatic speech recognition (Stolcke et al., 2000). In contrast to this sequential modeling approach, which is best suited to offline processing, recent work has explored how lexical, syntactic, and prosodic features perform for online dialogue act tagging (when only partial dialogue sequences are available) within a maximum entropy framework (Sridhar, Bangalore, & Narayanan, 2009). A recently proposed alternative approach involves treating dialogue utterances as documents within a latent semantic analysis framework, and applying feature enhancements that incorporate such information as speaker and utterance duration (Di Eugenio et al., 2010). Of the approaches noted above, the modeling framework presented in this paper is most similar to the vector-based maximum entropy approach of Sridhar et al. (2009). However, it takes a step beyond the previous work by including multimodal affective displays, specifically facial expressions, as features available to an affect-enriched dialogue act classification model. 2.2 Detecting Emotions in Dialogue Detecting emotional states during spoken dialogue is an active area of research, much of which focuses on detecting frustration so that a user can be automatically transferred to a human dialogue agent (López-Cózar et al., 2010). 
Research on spoken dialogue has leveraged lexical features along with discourse cues and acoustic information to classify user emotion, sometimes at a coarse grain along a positive/negative axis (Lee & Narayanan, 2005). Recent work on an affective companion agent has examined user emotion classification within conversational speech (Cavazza et al., 2010). In contrast to that spoken dialogue research, the work in this paper is situated within textual dialogue, a widely used modality of communication for which a deeper understanding of user affect may substantially improve system performance. While many projects have focused on linguistic cues, recent work has begun to explore numerous channels for affect detection including facial actions, electrocardiograms, skin conductance, and posture sensors (Calvo & D'Mello, 2010). A recent project in a map task domain investigates some of these sources of affect data within task-oriented dialogue (Cavicchio, 2009). Like that work, the current project utilizes facial action tagging, for 1191 which promising automatic technologies exist (Bartlett et al., 2006; Pantic & Bartlett, 2007; Zeng, Pantic, Roisman, & Huang, 2009). However, we leverage the recognized expressions of emotion for the task of dialogue act classification. 2.3 Categorizing Emotions within Dialogue and Discourse Sets of emotion taxonomies for discourse and dialogue are often application-specific, for example, focusing on the frustration of users who are interacting with a spoken dialogue system (LópezCózar et al., 2010), or on uncertainty expressed by students while interacting with a tutor (ForbesRiley, Rotaru, Litman, & Tetreault, 2007). In contrast, the most widely utilized emotion frameworks are not application-specific; for example, Ekman’s Facial Action Coding System (FACS) has been widely used as a rigorous technique for coding facial movements based on human facial anatomy (Ekman & Friesen, 1978). Within this framework, facial movements are categorized into facial action units, which represent discrete movements of muscle groups. Additionally, facial action descriptors (for movements not derived from facial muscles) and movement and visibility codes are included. Ekman’s basic emotions (Ekman, 1999) have been used in recent work on classifying emotion expressed within blog text (Das & Bandyopadhyay, 2009), while other recent work (Nguyen, 2010) utilizes Russell’s core affect model (Russell, 2003) for a similar task. During tutorial dialogue, students may not frequently experience Ekman’s basic emotions of happiness, sadness, anger, fear, surprise, and disgust. Instead, students appear to more frequently experience cognitive-affective states such as flow and confusion (Calvo & D'Mello, 2010). Our work leverages Ekman’s facial tagging scheme to identify a particular facial action unit, Action Unit 4 (AU4), that has been observed to correlate with confusion (Craig, D'Mello, Witherspoon, Sullins, & Graesser, 2004; D'Mello, Craig, Sullins, & Graesser, 2006; McDaniel et al., 2007). 2.4 Importance of Confusion in Tutorial Dialogue Among the affective states that students experience during tutorial dialogue, confusion is prevalent, and its implications for student learning are significant. Confusion is associated with cognitive disequilibrium, a state in which students’ existing knowledge is inconsistent with a novel learning experience (Graesser, Lu, Olde, Cooper-Pye, & Whitten, 2005). 
Students may express such confusion within dialogue as uncertainty, to which human tutors often adapt in a context-dependent fashion (Forbes-Riley et al., 2007). Moreover, implementing adaptations to student uncertainty within a dialogue system can improve the effectiveness of the system (Forbes-Riley et al., 2009). For tutorial dialogue, the importance of understanding student utterances is paramount for a system to positively impact student learning (Dzikovska, Moore, Steinhauser, & Campbell, 2010). The importance of frustration as a cognitive-affective state during learning suggests that the presence of student confusion may serve as a useful constraining feature for dialogue act classification of student utterances. This paper explores the use of facial expression features in this way. 3 Task-Oriented Dialogue Corpus The corpus was collected during a textual humanhuman tutorial dialogue study in the domain of introductory computer science (Boyer, Phillips, et al., 2010). Students solved an introductory computer programming problem and carried on textual dialogue with tutors, who viewed a synchronized version of the students’ problem-solving workspace. The original corpus consists of 48 dialogues, one per student. Each student interacted with one of two tutors. Facial videos of students were collected using built-in webcams, but were not shown to the tutors. Video quality was ranked based on factors such as obscured foreheads due to hats or hair, and improper camera position resulting in students’ faces not being fully captured on the video. The highest-quality set contained 14 videos, and these videos were used in this analysis. They have a total running time of 11 hours and 55 minutes, and include dialogues with three female subjects and eleven male subjects. 3.1 Dialogue act annotation The dialogue act annotation scheme (Table 1) was applied manually. The kappa statistic for interannotator agreement on a 10% subset of the corpus was κ=0.80, indicating good reliability. 1192 Table 1. Dialogue act tags and relative frequencies across fourteen dialogues in video corpus Student Dialogue Act Example Rel. Freq. EXTRA-DOMAIN (EX) Little sleep deprived today .08 GROUNDING (G) Ok or Thanks .21 NEGATIVE FEEDBACK WITH ELABORATION (NE) I’m still confused on what this next for loop is doing. .02 NEGATIVE FEEDBACK (N) I don’t see the diff. .04 POSITIVE FEEDBACK WITH ELABORATION (PE) It makes sense now that you explained it, but I never used an else if in any of my other programs .04 POSITIVE FEEDBACK (P) Second part complete. .11 QUESTION (Q) Why couldn’t I have said if (i<5) .11 STATEMENT (S) i is my only index .07 REQUEST FOR FEEDBACK (RF) So I need to create a new method that sees how many elements are in my array? .16 RESPONSE (RSP) You mean not the length but the contents .14 UNCERTAIN FEEDBACK WITH ELABORATION (UE) I’m trying to remember how to copy arrays .008 UNCERTAIN FEEDBACK (U) Not quite yet .008 3.2 Task action annotation The tutoring sessions were task-oriented, focusing on a computer programming exercise. The task had several subtasks consisting of programming modules to be implemented by the student. Each of those subtasks also had numerous fine-grained goals, and student task actions either contributed or did not contribute to the goals. Therefore, to obtain a rich representation of the task, a manual annotation along two dimensions was conducted (Boyer, Phillips, et al., 2010). 
First, the subtask structure was annotated hierarchically, and then each task action was labeled for correctness according to the requirements of the assignment. Inter-annotator agreement was computed on 20% of the corpus at the leaves of the subtask tagging scheme, and resulted in a simple kappa of κ=.56. However, the leaves of the annotation scheme feature an implicit ordering (subtasks were completed in order, and adjacent subtasks are semantically more similar than subtasks at a greater distance); therefore, a weighted kappa is also meaningful to consider for this annotation. The weighted kappa is κweighted=.80. An annotated excerpt of the corpus is displayed in Table 2. Table 2. Excerpt from corpus illustrating annotations and interplay between dialogue and task 13:38:09 Student: How do I know where to end? [RF] 13:38:26 Tutor: Well you told me how to get how many elements in an array by using .length right? 13:38:26 Student: [Task action: Subtask 1-a-iv, Buggy] 13:38:56 Tutor: Great 13:38:56 Student: [Task action: Subtask 1-a-v, Correct] 13:39:35 Student: Well is it "array.length"? [RF] **Facial Expression: AU4 13:39:46 Tutor: You just need to use the correct array name 13:39:46 Student: [Task action: Subtask 1-a-iv, Buggy] 3.3 Lexical and Syntactic Features In addition to the manually annotated dialogue and task features described above, syntactic features of each utterance were automatically extracted using the Stanford Parser (De Marneffe et al., 2006). From the phrase structure trees, we extracted the top-most syntactic node and its first two children. In the case where an utterance consisted of more than one sentence, only the phrase structure tree of the first sentence was considered. Individual word tokens in the utterances were further processed with the Porter Stemmer (Porter, 1980) in the NLTK package (Loper & Bird, 2004). Our prior work has shown that these lexical and syntactic features are highly predictive of dialogue acts during task-oriented tutorial dialogue (Boyer, Ha et al. 2010). 1193 4 Facial Action Tagging An annotator who was certified in the Facial Action Coding System (FACS) (Ekman, Friesen, & Hager, 2002) tagged the video corpus consisting of fourteen dialogues. The FACS certification process requires annotators to pass a test designed to analyze their agreement with reference coders on a set of spontaneous facial expressions (Ekman & Rosenberg, 2005). This annotator viewed the videos continuously and paused the playback whenever notable facial displays of Action Unit 4 (AU4: Brow Lowerer) were seen. This action unit was chosen for this study based on its correlations with confusion in prior research (Craig, D'Mello, Witherspoon, Sullins, & Graesser, 2004; D'Mello, Craig, Sullins, & Graesser, 2006; McDaniel et al., 2007). To establish reliability of the annotation, a second FACS-certified annotator independently annotated 36% of the video corpus (5 of 14 dialogues), chosen randomly after stratification by gender and tutor. This annotator followed the same method as the first annotator, pausing the video at any point to tag facial action events. At any given time in the video, the coder was first identifying whether an action unit event existed, and then describing the facial movements that were present. The annotators also specified the beginning and ending time of each event. In this way, the action unit event tags spanned discrete durations of varying length, as specified by the coders. 
Because the two coders were not required to tag at the same point in time, but rather were permitted the freedom to stop the video at any point where they felt a notable facial action event occurred, calculating agreement between annotators required discretizing the continuous facial action time windows across the tutoring sessions. This discretization was performed at granularities of 1/4, 1/2, 3/4, and 1 second, and inter-rater reliability was calculated at each level of granularity (Table 3). Windows in which both annotators agreed that no facial action event was present were tagged by default as neutral. Figure 1 illustrates facial expressions that display facial Action Unit 4. Table 3. Kappa values for inter-annotator agreement on facial action events Granularity ¼ sec ½ sec ¾ sec 1 sec Presence of AU4 (Brow Lowerer) .84 .87 .86 .86 Figure 1. Facial expressions displaying AU4 (Brow Lowerer) Despite the fact that promising automatic approaches exist to identifying many facial action units (Bartlett et al., 2006; Cohn, Reed, Ambadar, Xiao, & Moriyama, 2004; Pantic & Bartlett, 2007; Zeng, Pantic, Roisman, & Huang, 2009), manual annotation was selected for this project for two reasons. First, manual annotation is more robust than automatic recognition of facial action units, and manual annotation facilitated an exploratory, comprehensive view of student facial expressions during learning through task-oriented dialogue. Although a detailed discussion of the other emotions present in the corpus is beyond the scope of this paper, Figure 2 illustrates some other spontaneous student facial expressions that differ from those associated with confusion. 1194 Figure 2. Other facial expressions from the corpus 5 Models The goal of the modeling experiment was to determine whether the addition of confusion-related facial expression features significantly boosts dialogue act classification accuracy for student utterances. 5.1 Features We take a vector-based approach, in which the features consist of the following: Utterance Features • Dialogue act features: Manually annotated dialogue act for the past three utterances. These features include tutor dialogue acts, annotated with a scheme analogous to that used to annotate student utterances (Boyer et al., 2009). • Speaker: Speaker for past three utterances • Lexical features: Word unigrams • Syntactic features: Top-most syntactic node and its first two children Task-based Features • Subtask: Hierarchical subtask structure for past three task actions (semantic programming actions taken by student) • Correctness: Correctness of past three task actions taken by student • Preceded by task: Indicator for whether the most recent task action immediately preceded the target utterance, or whether it was immediately preceded by the last dialogue move Facial Expression Features • AU4_1sec: Indicator for the display of the brow lowerer within 1 second prior to this utterance being sent, for the most recent three utterances • AU4_5sec: Indicator for the display of the brow lowerer within 5 seconds prior to this utterance being sent, for the most recent three utterances • AU4_10sec: Indicator for the display of the brow lowerer within 10 seconds prior to this utterance being sent, for the most recent three utterances 5.2 Modeling Approach A logistic regression approach was used to classify the dialogue acts based on the above feature vectors. 
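As an illustration of what those vectors contain, the sketch below derives the AU4 window indicators from timestamped annotations. The data layout here (a list of (start, end) AU4 event spans and per-utterance send times, all in seconds) is hypothetical and chosen only for illustration; it is not the representation used in the study.

```python
def au4_active_within(au4_events, send_time, window):
    """True if any annotated AU4 (brow lowerer) event overlaps the `window`
    seconds immediately before the utterance was sent."""
    lo = send_time - window
    return any(start <= send_time and end >= lo for start, end in au4_events)

def au4_features(au4_events, utterance_times, index,
                 history=3, windows=(1, 5, 10)):
    """Indicator features at 1-, 5-, and 10-second windows for the target
    utterance and the two utterances before it (three utterances of history),
    mirroring the AU4_1sec / AU4_5sec / AU4_10sec features listed above."""
    feats = {}
    for back in range(history):
        i = index - back
        for w in windows:
            feats[f"AU4_{w}sec_utt-{back}"] = (
                i >= 0 and au4_active_within(au4_events, utterance_times[i], w))
    return feats
```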
The Weka machine learning toolkit (Hall et al., 2009) was used to learn the models and to first perform feature selection in a best-first search. Logistic regression is a generalized maximum likelihood model that discriminates between pairs of output values by calculating a feature weight vector over the predictors. The goal of this work is to explore the utility of confusion-related facial features in the context of particular dialogue act types. For this reason, a specialized classifier was learned by dialogue act.

5.3 Classification Results

The classification accuracy and kappa for each specialized classifier is displayed in Table 4. Note that kappa statistics adjust for the accuracy that would be expected by majority-baseline chance; a kappa statistic of zero indicates that the classifier performed equal to chance, and a positive kappa statistic indicates that the classifier performed better than chance. A kappa of 1 constitutes perfect agreement. As the table illustrates, the feature selection chose to utilize the AU4 feature for every dialogue act except STATEMENT (S). When considering the accuracy of the model across the ten folds, two of the affect-enriched classifiers exhibited statistically significantly better performance. For GROUNDING (G) and REQUEST FOR FEEDBACK (RF), the facial expression features significantly improved the classification accuracy compared to a model that was learned without affective features.

6 Discussion

Dialogue act classification is an essential task for dialogue systems, and it has been addressed with a variety of modeling approaches and feature sets. We have presented a novel approach that treats facial expressions of students as constraining features for an affect-enriched dialogue act classification model in task-oriented tutorial dialogue. The results suggest that knowledge of the student’s confusion-related facial expressions can significantly enhance dialogue act classification for two types of dialogue acts, GROUNDING and REQUEST FOR FEEDBACK.

Table 4. Classification accuracy and kappa for specialized DA classifiers. Statistically significant differences (across ten folds, one-tailed t-test) are shown in bold.

                Classifier with AU4        Classifier without AU4
Dialogue Act    % acc    κ                 % acc    κ                p-value
EX              90.7     .62               89.0     .28              >.05
G               92.6     .76               91       .71              .018
P               93       .49               92.2     .40              >.05
Q               94.6     .72               94.2     .72              >.05
S               Not chosen in feat. sel.   93       .22              n/a
RF              90.7     .62               88.3     .53              .003
RSP             93       .68               95       .75              >.05
NE              *                          *
N               *                          *
PE              *                          *
U               *                          *
UE              *                          *
*Too few instances for ten-fold cross-validation.

6.1 Features Selected for Classification

Out of more than 1500 features available during feature selection, each of the specialized dialogue act classifiers selected between 30 and 50 features in each condition (with and without affect features). To gain insight into the specific features that were useful for classifying these dialogue acts, it is useful to examine which of the AU4 history features were chosen during feature selection. For GROUNDING, features that indicated the presence or absence of AU4 in the immediately preceding utterance, either at the 1 second or 5 second granularity, were selected. Absence of this confusion-related facial action unit was associated with a higher probability of a grounding act, such as an acknowledgement.
This finding is consistent with our understanding of how students and tutors interacted in this corpus; when a student experienced confusion, she would be unlikely to then make a simple grounding dialogue move, but instead would tend to inspect her computer program, ask a question, or wait for the tutor to explain more. For REQUEST FOR FEEDBACK, the predictive features were presence or absence of AU4 within ten seconds of the longest available history (three turns in the past), as well as the presence of AU4 within five seconds of the current utterance (the utterance whose dialogue act is being classified). This finding suggests that there may be some lag between the student experiencing confusion and then choosing to make a request for feedback, and that the confusion-related facial expressions may re-emerge as the student is making a request for feedback, since the five-second window prior to the student sending the textual dialogue message would overlap with the student’s construction of the message itself. Although the improvements seen with AU4 features for QUESTION, POSITIVE FEEDBACK, and EXTRA-DOMAIN acts were not statistically reliable, examining the AU4 features that were selected for classifying these moves points toward ways in which facial expressions may influence classification of these acts (Table 5). 1196 Table 5. Number of features, and AU4 features selected, for specialized DA classifiers Dialogue Act # features selected AU4 features selected G 43 One utterance ago: AU4_1sec, AU4_5sec RF 37 Three utterances ago: AU4_10sec Target utterance: AU4_5sec EX 50 Three utterances ago: AU4_1sec P 36 Current utterance: AU4_10sec Q 30 One utterance ago: AU4_5sec 6.2 Implications The results presented here demonstrate that leveraging knowledge of user affect, in particular of spontaneous facial expressions, may improve the performance of dialogue act classification models. Perhaps most interestingly, displays of confusionrelated facial actions prior to a student dialogue move enabled an affect-enriched classifier to recognize requests for feedback with significantly greater accuracy than a classifier that did not have access to the facial action features. Feedback is known to be a key component of effective tutorial dialogue, through which tutors provide adaptive help (Shute, 2008). Requesting feedback also seems to be an important behavior of students, characteristically engaged in more frequently by women than men, and more frequently by students with lower incoming knowledge than by students with higher incoming knowledge (Boyer, Vouk, & Lester, 2007). 6.3 Limitations The experiments reported here have several notable limitations. First, the time-consuming nature of manual facial action tagging restricted the number of dialogues that could be tagged. Although the highest quality videos were selected for annotation, other medium quality videos would have been sufficiently clear to permit tagging, which would have increased the sample size and likely revealed statistically significant trends. For example, the performance of the affect-enriched classifier was better for dialogue acts of interest such as positive feedback and questions, but this difference was not statistically reliable. An additional limitation stems from the more fundamental question of which affective states are indicated by particular external displays. The field is only just beginning to understand facial expressions during learning and to correlate these facial actions with emotions. 
Additional research into the “ground truth” of emotion expression will shed additional light on this area. Finally, the results of manual facial action annotation may constitute upper-bound findings for applying automatic facial expression analysis to dialogue act classification. 7 Conclusions and Future Work Emotion plays a vital role in human interactions. In particular, the role of facial expressions in humanhuman dialogue is widely recognized. Facial expressions offer a promising channel for understanding the emotions experienced by users of dialogue systems, particularly given the ubiquity of webcam technologies and the increasing number of dialogue systems that are deployed on webcamenabled devices. This paper has reported on a first step toward using knowledge of user facial expressions to improve a dialogue act classification model for tutorial dialogue, and the results demonstrate that facial expressions hold great promise for distinguishing the pedagogically relevant dialogue act REQUEST FOR FEEDBACK, and the conversational moves of GROUNDING. These early findings highlight the importance of future work in this area. Dialogue act classification models have not fully leveraged some of the techniques emerging from work on sentiment analysis. These approaches may prove particularly useful for identifying emotions in dialogue utterances. Another important direction for future work involves more fully exploring the ways in which affect expression differs between textual and spoken dialogue. Finally, as automatic facial tagging technologies mature, they may prove powerful enough to enable broadly deployed dialogue systems to feasibly leverage facial expression data in the near future. 1197 Acknowledgments This work is supported in part by the North Carolina State University Department of Computer Science and by the National Science Foundation through Grants REC-0632450, IIS-0812291, DRL1007962 and the STARS Alliance Grant CNS0739216. Any opinions, findings, conclusions, or recommendations expressed in this report are those of the participants, and do not necessarily represent the official views, opinions, or policy of the National Science Foundation. References A. Andreevskaia and S. Bergler. 2008. When specialists and generalists work together: Overcoming domain dependence in sentiment tagging. Proceedings of the Annual Meeting of the Association for Computational Linguistics and Human Language Technologies (ACL HLT), 290-298. M.S. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan. 2006. Fully Automatic Facial Action Recognition in Spontaneous Behavior. 7th International Conference on Automatic Face and Gesture Recognition (FGR06), 223-230. K.E. Boyer, M. Vouk, and J.C. Lester. 2007. The influence of learner characteristics on task-oriented tutorial dialogue. Proceedings of the International Conference on Artificial Intelligence in Education, 365–372. K.E. Boyer, E.Y. Ha, R. Phillips, M.D. Wallis, M. Vouk, and J.C. Lester. 2010. Dialogue act modeling in a complex task-oriented domain. Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 297-305. K.E. Boyer, R. Phillips, E.Y. Ha, M.D. Wallis, M.A. Vouk, and J.C. Lester. 2009. Modeling dialogue structure with adjacency pair analysis and hidden Markov models. Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Short Papers, 49-52. K.E. Boyer, R. Phillips, E.Y. Ha, M.D. Wallis, M.A. Vouk, and J.C. 
Lester. 2010. Leveraging hidden dialogue state to select tutorial moves. Proceedings of the NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications, 66-73. R.A. Calvo and S. D’Mello. 2010. Affect Detection: An Interdisciplinary Review of Models, Methods, and Their Applications. IEEE Transactions on Affective Computing, 1(1): 18-37. M. Cavazza, R.S.D.L. Cámara, M. Turunen, J. Gil, J. Hakulinen, N. Crook, et al. 2010. How was your day? An affective companion ECA prototype. Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 277-280. F. Cavicchio. 2009. The modulation of cooperation and emotion in dialogue: the REC Corpus. Proceedings of the ACL-IJCNLP 2009 Student Research Workshop, 43-48. J.F. Cohn, L.I. Reed, Z. Ambadar, J. Xiao, and T. Moriyama. 2004. Automatic Analysis and Recognition of Brow Actions and Head Motion in Spontaneous Facial Behavior. IEEE International Conference on Systems, Man and Cybernetics, 610-616. S.D. Craig, S. D’Mello, A. Witherspoon, J. Sullins, and A.C. Graesser. 2004. Emotions during learning: The first steps toward an affect sensitive intelligent tutoring system. In J. Nall and R. Robson (Eds.), E-learn 2004: World conference on Elearning in Corporate, Government, Healthcare, & Higher Education, 241-250. D. Das and S. Bandyopadhyay. 2009. Word to sentence level emotion tagging for Bengali blogs. Proceedings of the ACL-IJCNLP Conference, Short Papers, 149-152. S. Dasgupta and V. Ng. 2009. Mine the easy, classify the hard: a semi-supervised approach to automatic sentiment classification. Proceedings of the 46th Annual Meeting of the ACL and the 4th IJCNLP, 701-709. B. Di Eugenio, Z. Xie, and R. Serafin. 2010. Dialogue Act Classification, Higher Order Dialogue Structure, and Instance-Based Learning. Dialogue & Discourse, 1(2): 1-24. M. Dzikovska, J.D. Moore, N. Steinhauser, and G. Campbell. 2010. The impact of interpretation problems on tutorial dialogue. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, Short Papers, 43-48. S. D’Mello, S.D. Craig, J. Sullins, and A.C. Graesser. 2006. Predicting Affective States expressed through an Emote-Aloud Procedure from AutoTutor’s Mixed- Initiative Dialogue. International Journal of Artificial Intelligence in Education, 16(1): 3-28. P. Ekman. 1999. Basic Emotions. In T. Dalgleish and M. J. Power (Eds.), Handbook of Cognition and Emotion. New York: Wiley. P. Ekman, W.V. Friesen. 1978. Facial Action Coding System. Palo Alto, CA: Consulting Psychologists Press. P. Ekman, W.V. Friesen, and J.C. Hager. 2002. Facial Action Coding System: Investigator’s Guide. Salt Lake City, USA: A Human Face. 1198 P. Ekman and E.L. Rosenberg (Eds.). 2005. What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS) (2nd ed.). New York: Oxford University Press. K. Forbes-Riley, M. Rotaru, D.J. Litman, and J. Tetreault. 2007. Exploring affect-context dependencies for adaptive system development. The Conference of the North American Chapter of the Association for Computational Linguistics and Human Language Technologies (NAACL HLT), Short Papers, 41-44. K. Forbes-Riley, M. Rotaru, D.J. Litman, and J. Tetreault. 2009. Adapting to student uncertainty improves tutoring dialogues. Proceedings of the 14th International Conference on Artificial Intelligence in Education (AIED), 33-40. A.C. Graesser, S. Lu, B. Olde, E. Cooper-Pye, and S. Whitten. 2005. 
Question asking and eye tracking during cognitive disequilibrium: comprehending illustrated texts on devices when the devices break down. Memory & Cognition, 33(7): 1235-1247. S. Greene and P. Resnik. 2009. More than words: Syntactic packaging and implicit sentiment. Proceedings of the 2009 Annual Conference of the North American Chapter of the ACL and Human Language Technologies (NAACL HLT), 503-511. M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I.H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Explorations, 11(1): 10–18. R. Iida, S. Kobayashi, and T. Tokunaga. 2010. Incorporating extra-linguistic information into reference resolution in collaborative task dialogue. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, 1259-1267. M.L. Knapp and J.A. Hall. 2006. Nonverbal Communication in Human Interaction (6th ed.). Belmont, CA: Wadsworth/Thomson Learning. C.M. Lee, S.S. Narayanan. 2005. Toward detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing, 13(2): 293-303. R. López-Cózar, J. Silovsky, and D. Griol. 2010. F2– New Technique for Recognition of User Emotional States in Spoken Dialogue Systems. Proceedings of the 11th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 281-288. B.T. McDaniel, S. D’Mello, B.G. King, P. Chipman, K. Tapp, and A.C. Graesser. 2007. Facial Features for Affective State Detection in Learning Environments. Proceedings of the 29th Annual Cognitive Science Society, 467-472. D. McNeill. 1992. Hand and mind: What gestures reveal about thought. Chicago: University of Chicago Press. A. Mehrabian. 2007. Nonverbal Communication. New Brunswick, NJ: Aldine Transaction. T. Nguyen. 2010. Mood patterns and affective lexicon access in weblogs. Proceedings of the ACL 2010 Student Research Workshop, 43-48. M. Pantic and M.S. Bartlett. 2007. Machine Analysis of Facial Expressions. In K. Delac and M. Grgic (Eds.), Face Recognition, 377-416. Vienna, Austria: I-Tech Education and Publishing. J.A. Russell. 2003. Core affect and the psychological construction of emotion. Psychological Review, 110(1): 145-172. J.A. Russell, J.A. Bachorowski, and J.M. FernandezDols. 2003. Facial and vocal expressions of emotion. Annual Review of Psychology, 54, 329-49. K.L. Schmidt and J.F. Cohn. 2001. Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research. Am J Phys Anthropol, 33: 3-24. V.J. Shute. 2008. Focus on Formative Feedback. Review of Educational Research, 78(1): 153-189. V.K.R Sridar, S. Bangalore, and S.S. Narayanan. 2009. Combining lexical, syntactic and prosodic cues for improved online dialog act tagging. Computer Speech & Language, 23(4): 407-422. Elsevier Ltd. A. Stolcke, K. Ries, N. Coccaro, E. Shriberg, R. Bates, D. Jurafsky, et al. 2000. Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech. Computational Linguistics, 26(3): 339-373. C. Toprak, N. Jakob, and I. Gurevych. 2010. Sentence and expression level annotation of opinions in user-generated discourse. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, 575-584. T. Wilson, J. Wiebe, and P. Hoffmann. 2009. Recognizing Contextual Polarity: An Exploration of Features for Phrase-Level Sentiment Analysis. Computational Linguistics, 35(3): 399-433. Z. Zeng, M. Pantic, G.I. Roisman, and T.S. Huang. 2009. A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions. 
IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1): 39-58.
2011
119
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 112–122, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Query Weighting for Ranking Model Adaptation Peng Cai1, Wei Gao2, Aoying Zhou1, and Kam-Fai Wong2,3 1East China Normal University, Shanghai, China [email protected], [email protected] 2The Chinese University of Hong Kong, Shatin, N.T., Hong Kong {wgao, kfwong}@se.cuhk.edu.hk 3Key Laboratory of High Confidence Software Technologies, Ministry of Education, China Abstract We propose to directly measure the importance of queries in the source domain to the target domain where no rank labels of documents are available, which is referred to as query weighting. Query weighting is a key step in ranking model adaptation. As the learning object of ranking algorithms is divided by query instances, we argue that it’s more reasonable to conduct importance weighting at query level than document level. We present two query weighting schemes. The first compresses the query into a query feature vector, which aggregates all document instances in the same query, and then conducts query weighting based on the query feature vector. This method can efficiently estimate query importance by compressing query data, but the potential risk is information loss resulted from the compression. The second measures the similarity between the source query and each target query, and then combines these fine-grained similarity values for its importance estimation. Adaptation experiments on LETOR3.0 data set demonstrate that query weighting significantly outperforms document instance weighting methods. 1 Introduction Learning to rank, which aims at ranking documents in terms of their relevance to user’s query, has been widely studied in machine learning and information retrieval communities (Herbrich et al., 2000; Freund et al., 2004; Burges et al., 2005; Yue et al., 2007; Cao et al., 2007; Liu, 2009). In general, large amount of training data need to be annotated by domain experts for achieving better ranking performance. In real applications, however, it is time consuming and expensive to annotate training data for each search domain. To alleviate the lack of training data in the target domain, many researchers have proposed to transfer ranking knowledge from the source domain with plenty of labeled data to the target domain where only a few or no labeled data is available, which is known as ranking model adaptation (Chen et al., 2008a; Chen et al., 2010; Chen et al., 2008b; Geng et al., 2009; Gao et al., 2009). Intuitively, the more similar an source instance is to the target instances, it is expected to be more useful for cross-domain knowledge transfer. This motivated the popular domain adaptation solution based on instance weighting, which assigns larger weights to those transferable instances so that the model trained on the source domain can adapt more effectively to the target domain (Jiang and Zhai, 2007). Existing instance weighting schemes mainly focus on the adaptation problem for classification (Zadrozny, 2004; Huang et al., 2007; Jiang and Zhai, 2007; Sugiyama et al., 2008). Although instance weighting scheme may be applied to documents for ranking model adaptation, the difference between classification and learning to rank should be highlighted to take careful consideration. 
Compared to classification, the learning object for ranking is essentially a query, which contains a list of document instances each with a relevance judgement. Recently, researchers proposed listwise ranking algorithms (Yue et al., 2007; Cao et al., 2007) to take the whole query as a learning object. The benchmark evaluation showed that list112 Target domain Source Domain d1 (s1) d2 (s1) d3 (s1) d1 (s2) d2 (s2) d3 (s2) d2 (t1) d1 (t2) d2 (t2) d3 (t2) d3 (t1) d1 (t1) (a) Instance based weighting d2 (s1) d1 (s1) d3 (s1) d1 (s2) d2 (s2) d3 (s2) qs2 qs1 d3 (t1) d2 (t1) d1 (t1) d1 (t2) d2 (t2) d3 (t2) qt1 qt2 Target domain Source Domain (b) Query based weighting Figure 1: The information about which document instances belong to the same query is lost in document instance weighting scheme. To avoid losing this information, query weighting takes the query as a whole and directly measures its importance. wise approach significantly outperformed pointwise approach, which takes each document instance as independent learning object, as well as pairwise approach, which concentrates learning on the order of a pair of documents (Liu, 2009). Inspired by the principle of listwise approach, we hypothesize that the importance weighting for ranking model adaptation could be done better at query level rather than document level. Figure 1 demonstrates the difference between instance weighting and query weighting, where there are two queries qs1 and qs2 in the source domain and qt1 and qt2 in the target domain, respectively, and each query has three retrieved documents. In Figure 1(a), source and target domains are represented as a bag of document instances. It is worth noting that the information about which document instances belong to the same query is lost. To avoid this information loss, query weighting scheme shown as Figure 1(b) directly measures importance weight at query level. Instance weighting makes the importance estimation of document instances inaccurate when documents of the same source query are similar to the documents from different target queries. Take Figure 2 as a toy example, where the document instance is represented as a feature vector with four features. No matter what weighting schemes are used, it makes sense to assign high weights to source queries qs1 and qs2 because they are similar to target queries qt1 and qt2, respectively. Meanwhile, the source query qs3 should be weighted lower because <d1 s1>=( 5, 1, 0 ,0 ) <d2 s1>=( 6, 2, 0 ,0 ) <d1 s2>=( 0, 0, 5, 1) <d2 s2>=( 0, 0, 6, 2) <d1 s3>=( 5, 1, 0, 0) <d2 s3>=( 0, 0, 6, 2) <d1 t1>=(5, 1, 0 ,0 ) <d2 t1>=(6, 2, 0 ,0 ) <d1 t2>=( 0, 0, 5, 1) <d2 t2>=( 0, 0, 6, 2) qs1 qs2 qs3 qt1 qt2 Figure 2: A toy example showing the problem of document instance weighting scheme. it’s not quite similar to any of qt1 and qt2 at query level, meaning that the ranking knowledge from qs3 is different from that of qt1 and qt2 and thus less useful for the transfer to the target domain. Unfortunately, the three source queries qs1, qs2 and qs3 would be weighted equally by document instance weighting scheme. The reason is that all of their documents are similar to the two document instances in target domain despite the fact that the documents of qs3 correspond to their counterparts from different target queries. Therefore, we should consider the source query as a whole and directly measure the query importance. 
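The toy example can be made concrete with a few lines of Python. This is an illustration only: cosine similarity and mean-vector aggregation are used here as stand-ins for whatever similarity a particular weighting scheme actually induces, and the numbers are the ones from Figure 2.

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

q_s3 = np.array([[5, 1, 0, 0], [0, 0, 6, 2]], dtype=float)   # documents of qs3
q_t1 = np.array([[5, 1, 0, 0], [6, 2, 0, 0]], dtype=float)   # documents of qt1
q_t2 = np.array([[0, 0, 5, 1], [0, 0, 6, 2]], dtype=float)   # documents of qt2

# Document level: every document of qs3 is identical to some target document.
doc_level = max(cos(d, t) for d in q_s3 for t in np.vstack([q_t1, q_t2]))

# Query level: the aggregate of qs3 resembles neither target query nearly as well.
query_level = max(cos(q_s3.mean(0), q_t1.mean(0)), cos(q_s3.mean(0), q_t2.mean(0)))

print(doc_level)    # 1.0  -- looks perfectly transferable document by document
print(query_level)  # about 0.78 -- noticeably less similar once aggregated
```

Whatever the exact numbers, aggregation changes the picture: qs1 and qs2 remain close to their target counterparts, while qs3 does not.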
However, it’s not trivial to directly estimate 113 a query’s weight because a query is essentially provided as a matrix where each row represents a vector of document features. In this work, we present two simple but very effective approaches attempting to resolve the problem from distinct perspectives: (1) we compress each query into a query feature vector by aggregating all of its document instances, and then conduct query weighting on these query feature vectors; (2) we measure the similarity between the source query and each target query one by one, and then combine these fine-grained similarity values to calculate its importance to the target domain. 2 Instance Weighting Scheme Review The basic idea of instance weighting is to put larger weights on source instances which are more similar to target domain. As a result, the key problem is how to accurately estimate the instance’s weight indicating its importance to target domain. (Jiang and Zhai, 2007) used a small number of labeled data from target domain to weight source instances. Recently, some researchers proposed to weight source instance only using unlabeled target instances (Shimodaira, 2000; Sugiyama et al., 2008; Huang et al., 2007; Zadrozny, 2004; Gao et al., 2010). In this work, we also focus on weighting source queries only using unlabeled target queries. (Gao et al., 2010; Ben-David et al., 2010) proposed to use a classification hyperplane to separate source instances from target instances. With the domain separator, the probability that a source instance is classified to target domain can be used as the importance weight. Other instance weighting methods were proposed for the sample selection bias or covariate shift in the more general setting of classifier learning (Shimodaira, 2000; Sugiyama et al., 2008; Huang et al., 2007; Zadrozny, 2004). (Sugiyama et al., 2008) used a natural model selection procedure, referred to as Kullback-Leibler divergence Importance Estimation Procedure (KLIEP), for automatically tuning parameters, and showed that its importance estimation was more accurate. The main idea is to directly estimate the density function ratio of target distribution pt(x) to source distribution ps(x), i.e. w(x) = pt(x) ps(x). Then model w(x) can be used to estimate the importance of source instances. Model parameters were computed with a linear model by minimizing the KL-divergence from pt(x) to its estimator ˆpt(x). Since ˆpt(x) = ˆw(x)ps(x), the ultimate objective only contains model ˆw(x). For using instance weighting in pairwise ranking algorithms, the weights of document instances should be transformed into those of document pairs (Gao et al., 2010). Given a pair of documents ⟨xi, xj⟩and their weights wi and wj, the pairwise weight wij could be estimated probabilistically as wi ∗wj. To consider query factor, query weight was further estimated as the average value of the weights over all the pairs, i.e., wq = 1 M ∑ i,j wij, where M is the number of pairs in query q. Additionally, to take the advantage of both query and document information, a probabilistic weighting for ⟨xi, xj⟩was modeled by wq ∗wij. Through the transformation, instance weighting schemes for classification can be applied to ranking model adaptation. 3 Query Weighting In this section, we extend instance weighting to directly estimate query importance for more effective ranking model adaptation. We present two query weighting methods from different perspectives. 
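As a concrete reading of this transformation, the sketch below uses invented document weights and follows the description above rather than any released code; the mapping of the three quantities to doc-pair, doc-avg, and doc-comb is our reading of Section 2.

```python
from itertools import combinations

def pair_and_query_weights(doc_weights, pairs):
    """doc_weights: {doc_id: w_i}; pairs: (doc_i, doc_j) preference pairs for one
    source query.  Returns per-pair weights, the query weight, and the combination."""
    w_pair = {(i, j): doc_weights[i] * doc_weights[j] for (i, j) in pairs}   # w_ij = w_i * w_j
    w_query = sum(w_pair.values()) / len(w_pair)                             # w_q = (1/M) sum_ij w_ij
    w_combined = {p: w_query * w for p, w in w_pair.items()}                 # w_q * w_ij
    return w_pair, w_query, w_combined

# Toy usage: three documents with instance weights from some weighting scheme.
doc_w = {"d1": 0.9, "d2": 0.6, "d3": 0.2}
pairs = list(combinations(doc_w, 2))   # all pairs, standing in for label-derived pairs
print(pair_and_query_weights(doc_w, pairs))
```

The three outputs appear to correspond, respectively, to the doc-pair, doc-avg, and doc-comb weighting variants compared in the experiments below.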
Note that although our methods are based on domain separator scheme, other instance weighting schemes such as KLIEP (Sugiyama et al., 2008) can also be extended similarly. 3.1 Query Weighting by Document Feature Aggregation Our first query weighting method is inspired by the recent work on local learning for ranking (Geng et al., 2008; Banerjee et al., 2009). The query can be compressed into a query feature vector, where each feature value is obtained by the aggregate of its corresponding features of all documents in the query. We concatenate two types of aggregates to construct the query feature vector: the mean ⃗µ = 1 |q| ∑|q| i=1 ⃗fi and the variance ⃗σ = 1 |q| ∑|q| i=1(⃗fi −⃗µ)2, where ⃗fi is the feature vector of document i and |q| denotes the number of documents in q . Based on the aggregation of documents within each query, we can use a domain separator to directly weight the source queries with the set of queries from both domains. Given query data sets Ds = {qi s}m i=1 and Dt = {qj t }n j=1 respectively from the source and target do114 Algorithm 1 Query Weighting Based on Document Feature Aggregation in the Query Input: Queries in the source domain, Ds = {qi s}m i=1; Queries in the target domain, Dt = {qj t }n j=1; Output: Importance weights of queries in the source domain, IWs = {Wi}m i=1; 1: ys = −1, yt = +1; 2: for i = 1; i ≤m; i + + do 3: Calculate the mean vector ⃗µi and variance vector ⃗σi for qi s; 4: Add query feature vector ⃗qi s = (⃗µi,⃗σi, ys) to D′ s ; 5: end for 6: for j = 1; j ≤n; j + + do 7: Calculate the mean vector ⃗µj and variance vector ⃗σj for qj t ; 8: Add query feature vector ⃗qj t = (⃗µj,⃗σj, yt) to D′ t; 9: end for 10: Find classification hyperplane Hst which separates D′ s from D′ t; 11: for i = 1; i ≤m; i + + do 12: Calculate the distance of ⃗qi s to Hst, denoted as L(⃗qi s); 13: Wi = P(qi s ∈Dt) = 1 1+exp(α∗L(⃗qis)+β) 14: Add Wi to IWs; 15: end for 16: return IWs; mains, we use algorithm 1 to estimate the probability that the query qi s can be classified to Dt, i.e. P(qi s ∈Dt), which can be used as the importance of qi s relative to the target domain. From step 1 to 9, D′ s and D′ t are constructed using query feature vectors from source and target domains. Then, a classification hyperplane Hst is used to separate D′ s from D′ t in step 10. The distance of the query feature vector ⃗qi s from Hst are transformed to the probability P(qi s ∈Dt) using a sigmoid function (Platt and Platt, 1999). 3.2 Query Weighting by Comparing Queries across Domains Although the query feature vector in algorithm 1 can approximate a query by aggregating its documents’ features, it potentially fails to capture important feature information due to the averaging effect during the aggregation. For example, the merit of features in some influential documents may be canceled out in the mean-variance calculation, resulting in many distorted feature values in the query feature vector that hurts the accuracy of query classification hyperplane. This urges us to propose another query weighting method from a different perspective of query similarity. Intuitively, the importance of a source query to the target domain is determined by its overall similarity to every target query. Based on this intuition, we leverage domain separator to measure the similarity between a source query and each one of the target queries, where an individual domain separator is created for each pair of queries. We estimate the weight of a source query using algorithm 2. 
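Before turning to Algorithm 2, Algorithm 1 is compact enough to sketch directly. In the sketch below, scikit-learn's logistic regression plays the role of the classification hyperplane plus the sigmoid mapping of its distance; the paper's separator and Platt-scaling parameters are not reproduced here, so this is an approximation rather than a reimplementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_feature_vector(docs):
    """docs: (n_docs, n_features) matrix for one query -> concatenated mean and variance."""
    return np.concatenate([docs.mean(axis=0), docs.var(axis=0)])

def algorithm1_weights(source_queries, target_queries):
    """source_queries / target_queries: lists of (n_docs, n_features) arrays.
    Returns one importance weight per source query, in the spirit of Algorithm 1."""
    Xs = np.array([query_feature_vector(q) for q in source_queries])   # y_s = -1 side
    Xt = np.array([query_feature_vector(q) for q in target_queries])   # y_t = +1 side
    X = np.vstack([Xs, Xt])
    y = np.array([-1] * len(Xs) + [+1] * len(Xt))
    # Logistic regression stands in for the hyperplane H_st and the sigmoid of the
    # distance L(q); the paper uses a separator with Platt-style calibration.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    # P(query belongs to the target domain) serves as the importance weight W_i.
    return clf.predict_proba(Xs)[:, list(clf.classes_).index(1)]
```

With one weight per source query in hand, the remaining question is how to obtain such weights without the information loss introduced by the aggregation step, which is exactly what Algorithm 2 addresses.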
Note that we assume document instances in the same query are conditionally independent and that all queries are independent of each other. In step 3, $D'_{q^i_s}$ is constructed from all the document instances $\{\vec{x}_k\}$ in query $q^i_s$, labeled with the domain label $y_s$. For each target query $q^j_t$, we use the classification hyperplane $H_{ij}$ to estimate $P(\vec{x}_k \in D'_{q^j_t})$, i.e., the probability that each document $\vec{x}_k$ of $q^i_s$ is classified into the document set of $q^j_t$ (step 8). The similarity between $q^i_s$ and $q^j_t$ is then measured by the probability $P(q^i_s \sim q^j_t)$ at step 9. Finally, the probability of $q^i_s$ belonging to the target domain, $P(q^i_s \in D_t)$, is calculated at step 11. It can be expected that Algorithm 2 will generate more precise measures of query similarity by utilizing the more fine-grained classification hyperplanes separating the queries of the two domains.

Algorithm 2 Query Weighting by Comparing Source and Target Queries
Input: Queries in the source domain, $D_s = \{q^i_s\}_{i=1}^m$; queries in the target domain, $D_t = \{q^j_t\}_{j=1}^n$;
Output: Importance weights of queries in the source domain, $IW_s = \{W_i\}_{i=1}^m$;
1: $y_s = -1$, $y_t = +1$;
2: for $i = 1; i \le m; i{+}{+}$ do
3:   Set $D'_{q^i_s} = \{(\vec{x}_k, y_s)\}_{k=1}^{|q^i_s|}$;
4:   for $j = 1; j \le n; j{+}{+}$ do
5:     Set $D'_{q^j_t} = \{(\vec{x}_{k'}, y_t)\}_{k'=1}^{|q^j_t|}$;
6:     Find a classification hyperplane $H_{ij}$ which separates $D'_{q^i_s}$ from $D'_{q^j_t}$;
7:     For each $k$, calculate the distance of $\vec{x}_k$ to $H_{ij}$, denoted as $L(\vec{x}_k)$;
8:     For each $k$, calculate $P(\vec{x}_k \in D'_{q^j_t}) = \frac{1}{1+\exp(\alpha L(\vec{x}_k)+\beta)}$;
9:     Calculate $P(q^i_s \sim q^j_t) = \frac{1}{|q^i_s|}\sum_{k=1}^{|q^i_s|} P(\vec{x}_k \in D'_{q^j_t})$;
10:    end for
11:   Add $W_i = P(q^i_s \in D_t) = \frac{1}{n}\sum_{j=1}^{n} P(q^i_s \sim q^j_t)$ to $IW_s$;
12: end for
13: return $IW_s$;

4 Ranking Model Adaptation via Query Weighting
To adapt the source ranking model to the target domain, we need to incorporate query weights into existing ranking algorithms. Note that query weights can be integrated with either pairwise or listwise algorithms. For pairwise algorithms, a straightforward way is to assign the query weight to all the document pairs associated with that query. Document instance weighting, however, cannot be appropriately utilized in the listwise approach. In order to compare query weighting with document instance weighting fairly, both must be applied within the same ranking approach; we therefore choose the pairwise approach to incorporate query weighting. In this section, we extend Ranking SVM (RSVM) (Herbrich et al., 2000; Joachims, 2002), one of the typical pairwise algorithms, for this purpose. Assume there are $m$ queries in the source-domain data set, and for each query $q_i$ there are $\ell(q_i)$ meaningful document pairs that can be constructed from the ground-truth rank labels. Given ranking function $f$, the objective of RSVM is presented as follows:

$$\min \; \frac{1}{2}\|\vec{w}\|^2 + C \sum_{i=1}^{m}\sum_{j=1}^{\ell(q_i)} \xi_{ij} \quad (1)$$
$$\text{subject to } z_{ij} \cdot f(\vec{w}, \vec{x}^{j(1)}_{q_i} - \vec{x}^{j(2)}_{q_i}) \ge 1 - \xi_{ij}, \qquad \xi_{ij} \ge 0, \; i = 1,\ldots,m; \; j = 1,\ldots,\ell(q_i)$$

where $\vec{x}^{j(1)}_{q_i}$ and $\vec{x}^{j(2)}_{q_i}$ are two documents with different rank labels, and $z_{ij} = +1$ if $\vec{x}^{j(1)}_{q_i}$ is labeled more relevant than $\vec{x}^{j(2)}_{q_i}$, or $z_{ij} = -1$ otherwise. Letting $\lambda = \frac{1}{2C}$ and replacing $\xi_{ij}$ with the hinge loss $(\cdot)_+$, Equation 1 can be rewritten in the following form:

$$\min \; \lambda\|\vec{w}\|^2 + \sum_{i=1}^{m}\sum_{j=1}^{\ell(q_i)} \Big(1 - z_{ij} \cdot f(\vec{w}, \vec{x}^{j(1)}_{q_i} - \vec{x}^{j(2)}_{q_i})\Big)_+ \quad (2)$$

Let $IW(q_i)$ represent the importance weight of source query $q_i$. Equation 2 is extended to integrate the query weight into the loss function in a straightforward way:

$$\min \; \lambda\|\vec{w}\|^2 + \sum_{i=1}^{m} IW(q_i) \sum_{j=1}^{\ell(q_i)} \Big(1 - z_{ij} \cdot f(\vec{w}, \vec{x}^{j(1)}_{q_i} - \vec{x}^{j(2)}_{q_i})\Big)_+$$

where $IW(\cdot)$
takes any one of the weighting schemes given by algorithm 1 and algorithm 2. 5 Evaluation We evaluated the proposed two query weighting methods on TREC-2003 and TREC-2004 web track datasets, which were released through LETOR3.0 as a benchmark collection for learning to rank by (Qin et al., 2010). Originally, different query tasks were defined on different parts of data in the collection, which can be considered as different domains for us. Adaptation takes place when ranking tasks are performed by using the models trained on the domains in which they were originally defined to rank the documents in other domains. Our goal is to demonstrate that query weighting can be more effective than the state-of-the-art document instance weighting. 5.1 Datasets and Setup Three query tasks were defined in TREC-2003 and TREC-2004 web track, which are home page finding (HP), named page finding (NP) and topic distillation (TD) (Voorhees, 2003; Voorhees, 2004). In this dataset, each document instance is represented by 64 features, including low-level features such as term frequency, inverse document frequency and document length, and high-level features such as BM25, language-modeling, PageRank and HITS. The number of queries of each task is given in Table 1. The baseline ranking model is an RSVM directly trained on the source domain without using any weighting methods, denoted as no-weight. We implemented two weighting measures based on domain separator and Kullback-Leibler divergence, referred to DS and KL, respectively. In DS measure, three document instance weighting methods based on probability principle (Gao et al., 2010) were implemented for comparison, denoted as doc-pair, doc-avg and doc-comb (see Section 2). In KL measure, there is no probabilistic meaning for KL weight Query Task TREC 2003 TREC 2004 Topic Distillation 50 75 Home Page finding 150 75 Named Page finding 150 75 Table 1: The number of queries in TREC-2003 and TREC-2004 web track and the doc-comb based on KL is not interpretable, and we only present the results of doc-pair and docavg for KL measure. Our proposed query weighting methods are denoted by query-aggr and querycomp, corresponding to document feature aggregation in query and query comparison across domains, respectively. All ranking models above were trained only on source domain training data and the labeled data of target domain was just used for testing. For training the models efficiently, we implemented RSVM with Stochastic Gradient Descent (SGD) optimizer (Shalev-Shwartz et al., 2007). The reported performance is obtained by five-fold cross validation. 5.2 Experimental Results The task of HP and NP are more similar to each other whereas HP/NP is rather different from TD (Voorhees, 2003; Voorhees, 2004). Thus, we carried out HP/NP to TD and TD to HP/NP ranking adaptation tasks. Mean Average Precision (MAP) (Baeza-Yates and Ribeiro-Neto, 1999) is used as the ranking performance measure. 5.2.1 Adaptation from HP/NP to TD The first set of experiments performed adaptation from HP to TD and NP to TD. The results of MAP are shown in Table 2. For the DS-based measure, as shown in the table, query-aggr works mostly better than no-weight,docpair, doc-avg and doc-comb, and query-comp performs the best among the five weighting methods. T-test on MAP indicates that the improvement of query-aggr over no-weight is statistically significant on two adaptation tasks while the improvement of document instance weighting over no-weight is statistically significant only on one task. 
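Before turning to the adaptation results, the sketch below shows how the query weights $IW(q)$ enter the pairwise hinge loss of Section 4 when RSVM is trained with stochastic gradient descent. It mirrors the weighted objective above but is not the authors' implementation; the Pegasos-style learning-rate schedule and the linear scoring function are assumptions made here.

```python
import numpy as np

def weighted_rsvm_sgd(pairs, query_weight, n_features, lam=1e-4, epochs=10, seed=0):
    """pairs: list of (query_id, x_pref, x_other) where x_pref is labeled more
    relevant than x_other; query_weight maps query_id -> IW(q).
    The ranking function is linear: f(w, x) = w . x."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n_features)
    t = 0
    for _ in range(epochs):
        for idx in rng.permutation(len(pairs)):
            t += 1
            eta = 1.0 / (lam * t)                       # Pegasos-style step size (assumed)
            qid, x_pref, x_other = pairs[idx]
            diff = x_pref - x_other
            margin = w @ diff                           # z_ij = +1 by construction of the pair
            w = (1.0 - eta * lam) * w                   # gradient of the regularizer
            if margin < 1.0:                            # the pair violates the margin
                w = w + eta * query_weight[qid] * diff  # hinge-loss term scaled by IW(q)
    return w
```

Setting query_weight to all ones recovers the unweighted baseline (no-weight); plugging in the outputs of Algorithm 1 or Algorithm 2 gives query-aggr and query-comp, respectively.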
All of the improvement of query-comp over no-weight, docpair,doc-avg and doc-comb are statistically significant. This demonstrates the effectiveness of query 117 Model Weighting method HP03 to TD03 HP04 to TD04 NP03 to TD03 NP04 to TD04 no-weight 0.2508 0.2086 0.1936 0.1756 DS doc-pair 0.2505 0.2042 0.1982† 0.1708 doc-avg 0.2514 0.2019 0.2122†‡ 0.1716 doc-comb 0.2562 0.2051 0.2224†‡♯ 0.1793 query-aggr 0.2573 0.2106†‡♯ 0.2088 0.1808†‡♯ query-comp 0.2816†‡♯ 0.2147†‡♯ 0.2392†‡♯ 0.1861†‡♯ KL doc-pair 0.2521 0.2048 0.1901 0.1761 doc-avg 0.2534 0.2127† 0.1904 0.1777 doc-comb query-aggr 0.1890 0.1901 0.1870 0.1643 query-comp 0.2548† 0.2142† 0.2313†‡♯ 0.1807† Table 2: Results of MAP for HP/NP to TD adaptation. †, ‡, ♯and boldface indicate significantly better than no-weight, doc-pair, doc-avg and doc-comb, respectively. Confidence level is set at 95% weighting compared to document instance weighting. Furthermore, query-comp can perform better than query-aggr. The reason is that although document feature aggregation might be a reasonable representation for a set of document instances, it is possible that some information could be lost or distorted in the process of compression. By contrast, more accurate query weights can be achieved by the more fine-grained similarity measure between the source query and all target queries in algorithm 2. For the KL-based measure, similar observation can be obtained. However, it’s obvious that DSbased models can work better than the KL-based. The reason is that KL conducts weighting by density function ratio which is sensitive to the data scale. Specifically, after document feature aggregation, the number of query feature vectors in all adaptation tasks is no more than 150 in source and target domains. It renders the density estimation in queryaggr is very inaccurate since the set of samples is too small. As each query contains 1000 documents, they seemed to provide query-comp enough samples for achieving reasonable estimation of the density functions in both domains. 5.2.2 Adaptation from TD to HP/NP To further validate the effectiveness of query weighting, we also conducted adaptation from TD to HP and TD to NP . MAP results with significant test are shown in Table 3. We can see that document instance weighting schemes including doc-pair, doc-avg and doc-comb can not outperform no-weight based on MAP measure. The reason is that each query in TD has 1000 retrieved documents in which 10-15 documents are relevant whereas each query in HP or NP only consists 1-2 relevant documents. Thus, when TD serves as the source domain, it leads to the problem that too many document pairs were generated for training the RSVM model. In this case, a small number of documents that were weighted inaccurately can make significant impact on many number of document pairs. Since query weighting method directly estimates the query importance instead of document instance importance, both query-aggr and querycomp can avoid such kind of negative influence that is inevitable in the three document instance weighting methods. 5.2.3 The Analysis on Source Query Weights An interesting problem is which queries in the source domain are assigned high weights and why it’s the case. Query weighting assigns each source query with a weight value. Note that it’s not meaningful to directly compare absolute weight values between query-aggr and query-comp because source query weights from distinct weighting methods have different range and scale. 
However, it is feasible to compare the weights with the same weighting method. Intuitively, if the ranking model learned from a source query can work well in target domain, it should get high weight. According to this intuition, if ranking models fq1s and fq2s are learned 118 model weighting scheme TD03 to HP03 TD04 to HP04 TD03 to NP03 TD04 to NP04 no-weight 0.6986 0.6158 0.5053 0.5427 DS doc-pair 0.6588 0.6235† 0.4878 0.5212 doc-avg 0.6654 0.6200 0.4736 0.5035 doc-comb 0.6932 0.6214† 0.4974 0.5077 query-aggr 0.7179†‡♯ 0.6292†‡♯ 0.5198†‡♯ 0.5551†‡♯ query-comp 0.7297†‡♯ 0.6499†‡♯ 0.5203†‡♯ 0.6541†‡♯ KL doc-pair 0.6480 0.6107 0.4633 0.5413 doc-avg 0.6472 0.6132 0.4626 0.5406 doc-comb – – – – query-aggr 0.6263 0.5929 0.4597 0.4673 query-comp 0.6530‡♯ 0.6358†‡♯ 0.4726 0.5559†‡♯ Table 3: Results of MAP for TD to HP/NP adaptation. †, ‡, ♯and boldface indicate significantly better than no-weight, doc-pair, doc-avg and doc-comb, respectively. Confidence level is set as 95%. from queries q1 s and q2 s respectively, and fq1s performs better than fq2s, then the source query weight of q1 s should be higher than that of q2 s. For further analysis, we compare the weight values between each source query pair, for which we trained RSVM on each source query and evaluated the learned model on test data from target domain. Then, the source queries are ranked according to the MAP values obtained by their corresponding ranking models. The order is denoted as Rmap. Meanwhile, the source queries are also ranked with respect to their weights estimated by DS-based measure, and the order is denoted as Rweight. We hope Rweight is correlated as positively as possible with Rmap. For comparison, we also ranked these queries according to randomly generated query weights, which is denoted as query-rand in addition to queryaggr and query-comp. The Kendall’s τ = P−Q P+Q is used to measure the correlation (Kendall, 1970), where P is the number of concordant query pairs and Q is the number of discordant pairs. It’s noted that τ’s range is from -1 to 1, and the larger value means the two ranking is better correlated. The Kendall’s τ by different weighting methods are given in Table 4 and 5. We find that Rweight produced by query-aggr and query-comp are all positively correlated with Rmap and clearly the orders generated by query-comp are more positive than those by query-aggr. This is another explanation why query-comp outperforms query-aggr. Furthermore, both are far better than weighting TD03 to HP03 TD04 to HP04 doc-pair 28,835 secs 21,640 secs query-aggr 182 secs 123 secs query-comp 15,056 secs 10,081 secs Table 6: The efficiency of weighting in seconds. query-rand because the Rweight by query-rand is actually independent of Rmap. 5.2.4 Efficiency In the situation where there are large scale data in source and target domains, how to efficiently weight a source query is another interesting problem. Without the loss of generality, we reported the weighting time of doc-pair, query-aggr and query-comp from adaptation from TD to HP using DS measure. As doc-avg and doc-comb are derived from doc-pair, their efficiency is equivalent to doc-pair. As shown in table 6, query-aggr can efficiently weight query using query feature vector. The reason is two-fold: one is the operation of query document aggregation can be done very fast, and the other is there are 1000 documents in each query of TD or HP, which means that the compression ratio is 1000:1. Thus, the domain separator can be found quickly. 
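Returning briefly to the rank-correlation analysis of Section 5.2.3, the τ statistic used there is simple to compute directly. The sketch below follows the definition given above; a real evaluation would typically call scipy.stats.kendalltau, which agrees with this definition when there are no ties.

```python
from itertools import combinations

def kendall_tau(weight_scores, map_scores):
    """weight_scores[i] and map_scores[i] are the estimated weight and the per-query
    MAP of source query i.  tau = (P - Q) / (P + Q) over all query pairs, ignoring ties."""
    P = Q = 0
    for i, j in combinations(range(len(map_scores)), 2):
        dw = weight_scores[i] - weight_scores[j]
        dm = map_scores[i] - map_scores[j]
        if dw * dm > 0:
            P += 1          # concordant: the two orderings agree on this pair
        elif dw * dm < 0:
            Q += 1          # discordant: the two orderings disagree
    return (P - Q) / (P + Q) if (P + Q) else 0.0

# e.g. kendall_tau(Rweight_scores, Rmap_scores) -> a value in [-1, 1]
```

Efficiency, discussed above and continued below, is a separate practical consideration from this correlation analysis.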
In addition, query-comp is more efficient than doc-pair because doc-pair needs too much time to find the separator using all instances from source and target domain. And query-comp uses a divide-and-conquer method to measure the similarity of source query to each target query, and then efficiently combine these 119 Weighting method HP03 to TD03 HP04 to TD04 NP03 to TD03 NP04 to TD04 query-aggr 0.0906 0.0280 0.0247 0.0525 query-comp 0.1001 0.0804 0.0711 0.1737 query-rand 0.0041 0.0008 -0.0127 0.0163 Table 4: The Kendall’s τ of Rweight and Rmap in HP/NP to TD adaptation. Weighting method TD03 to HP03 TD04 to HP04 TD03 to NP03 TD04 to NP04 query-aggr 0.1172 0.0121 0.0574 0.0464 query-comp 0.1304 0.1393 0.1586 0.0545 query-rand −0.0291 0.0022 0.0161 -0.0262 Table 5: The Kendall’s τ of Rweight and Rmap in TD to HP/NP adaptation. fine-grained similarity values. 6 Related Work Cross-domain knowledge transfer has became an important topic in machine learning and natural language processing (Ben-David et al., 2010; Jiang and Zhai, 2007; Blitzer et al., 2006; Daum´e III and Marcu, 2006). (Blitzer et al., 2006) proposed model adaptation using pivot features to build structural feature correspondence in two domains. (Pan et al., 2009) proposed to seek a common features space to reduce the distribution difference between the source and target domain. (Daum´e III and Marcu, 2006) assumed training instances were generated from source domain, target domain and crossdomain distributions, and estimated the parameter for the mixture distribution. Recently, domain adaptation in learning to rank received more and more attentions due to the lack of training data in new search domains. Existing ranking adaptation approaches can be grouped into feature-based (Geng et al., 2009; Chen et al., 2008b; Wang et al., 2009; Gao et al., 2009) and instancebased (Chen et al., 2010; Chen et al., 2008a; Gao et al., 2010) approaches. In (Geng et al., 2009; Chen et al., 2008b), the parameters of ranking model trained on the source domain was adjusted with the small set of labeled data in the target domain. (Wang et al., 2009) aimed at ranking adaptation in heterogeneous domains. (Gao et al., 2009) learned ranking models on the source and target domains independently, and then constructed a stronger model by interpolating the two models. (Chen et al., 2010; Chen et al., 2008a) weighted source instances by using small amount of labeled data in the target domain. (Gao et al., 2010) studied instance weighting based on domain separator for learning to rank by only using training data from source domain. In this work, we propose to directly measure the query importance instead of document instance importance by considering information at both levels. 7 Conclusion We introduced two simple yet effective query weighting methods for ranking model adaptation. The first represents a set of document instances within the same query as a query feature vector, and then directly measure the source query importance to the target domain. The second measures the similarity between a source query and each target query, and then combine the fine-grained similarity values to estimate its importance to target domain. 
We evaluated our approaches on LETOR3.0 dataset for ranking adaptation and found that: (1) the first method efficiently estimate query weights, and can outperform the document instance weighting but some information is lost during the aggregation; (2) the second method consistently and significantly outperforms document instance weighting. 8 Acknowledgement P. Cai and A. Zhou are supported by NSFC (No. 60925008) and 973 program (No. 2010CB731402). W. Gao and K.-F. Wong are supported by national 863 program (No. 2009AA01Z150). We also thank anonymous reviewers for their helpful comments. 120 References Ricardo A. Baeza-Yates and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. Somnath Banerjee, Avinava Dubey, Jinesh Machchhar, and Soumen Chakrabarti. 2009. Efficient and accurate local learning for ranking. In SIGIR workshop : Learning to rank for information retrieval, pages 1–8. Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning, 79(1-2):151–175. John Blitzer, Ryan Mcdonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of EMNLP. C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. 2005. Learning to rank using gradient descent. In Proceedings of ICML, pages 89–96. Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: from pairwise approach to listwise approach. In Proceedings of ICML, pages 129 – 136. Depin Chen, Jun Yan, Gang Wang, Yan Xiong, Weiguo Fan, and Zheng Chen. 2008a. Transrank: A novel algorithm for transfer of rank learning. In Proceedings of ICDM Workshops, pages 106–115. Keke Chen, Rongqing Lu, C.K. Wong, Gordon Sun, Larry Heck, and Belle Tseng. 2008b. Trada: Tree based ranking function adaptation. In Proceedings of CIKM. Depin Chen, Yan Xiong, Jun Yan, Gui-Rong Xue, Gang Wang, and Zheng Chen. 2010. Knowledge transfer for cross domain learning to rank. Information Retrieval, 13(3):236–253. Hal Daum´e III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26(1):101–126. Y. Freund, R. Iyer, R. Schapire, and Y. Singer. 2004. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933– 969. Jianfeng Gao, Qiang Wu, Chris Burges, Krysta Svore, Yi Su, Nazan Khan, Shalin Shah, and Hongyan Zhou. 2009. Model adaptation via model interpolation and boosting for web search ranking. In Proceedings of EMNLP. Wei Gao, Peng Cai, Kam Fai Wong, and Aoying Zhou. 2010. Learning to rank only using training data from related domain. In Proceedings of SIGIR, pages 162– 169. Xiubo Geng, Tie-Yan Liu, Tao Qin, Andrew Arnold, Hang Li, and Heung-Yeung Shum. 2008. Query dependent ranking using k-nearest neighbor. In Proceedings of SIGIR, pages 115–122. Bo Geng, Linjun Yang, Chao Xu, and Xian-Sheng Hua. 2009. Ranking model adaptation for domain-specific search. In Proceedings of CIKM. R. Herbrich, T. Graepel, and K. Obermayer. 2000. Large Margin Rank Boundaries for Ordinal Regression. MIT Press, Cambridge. Jiayuan Huang, Alexander J. Smola, Arthur Gretton, Karsten M. Borgwardt, and Bernhard Sch¨olkopf. 2007. Correcting sample selection bias by unlabeled data. In Proceedings of NIPS, pages 601–608. Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in nlp. In Proceedings of ACL. Thorsten Joachims. 2002. 
Optimizing search engines using clickthrough data. In Proceedings of SIGKDD, pages 133–142. Maurice Kendall. 1970. Rank Correlation Methods. Griffin. Tie-Yan Liu. 2009. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 3(3):225–331. Sinno Jialin Pan, Ivor W. Tsang, James T. Kwok, and Qiang Yang. 2009. Domain adaptation via transfer component analysis. In Proceedings of IJCAI, pages 1187–1192. John C. Platt and John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pages 61–74. MIT Press. Tao Qin, Tie-Yan Liu, Jun Xu, and Hang Li. 2010. Letor: A benchmark collection for research on learning to rank for information retrieval. Information Retrieval, 13(4):346–374. S. Shalev-Shwartz, Y. Singer, and N. Srebro. 2007. Pegasos: Primal estimated sub-gradient solver for svm. In Proceedings of the 24th International Conference on Machine Learning, pages 807–814. Hidetoshi Shimodaira. 2000. Improving predictive inference under covariate shift by weighting the loglikelihood function. Journal of Statistical Planning and Inference, 90:227–244. Masashi Sugiyama, Shinichi Nakajima, Hisashi Kashima, Paul von B¨unau, and Motoaki Kawanabe. 2008. Direct importance estimation with model selection and its application to covariate shift adaptation. In Proceedings of NIPS, pages 1433–1440. Ellen M. Voorhees. 2003. Overview of trec 2003. In Proceedings of TREC-2003, pages 1–13. Ellen M. Voorhees. 2004. Overview of trec 2004. In Proceedings of TREC-2004, pages 1–12. Bo Wang, Jie Tang, Wei Fan, Songcan Chen, Zi Yang, and Yanzhu Liu. 2009. Heterogeneous cross domain ranking in latent space. In Proceedings of CIKM. 121 Y. Yue, T. Finley, F. Radlinski, and T. Joachims. 2007. A support vector method for optimizing average precision. In Proceedings of SIGIR, pages 271–278. Bianca Zadrozny Zadrozny. 2004. Learning and evaluating classifiers under sample selection bias. In Proceedings of ICML, pages 325–332. 122
2011
12
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1200–1209, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Fine-Grained Class Label Markup of Search Queries Joseph Reisinger∗ Department of Computer Sciences The University of Texas at Austin Austin, Texas 78712 [email protected] Marius Pas¸ca Google Inc. 1600 Amphitheatre Parkway Mountain View, California 94043 [email protected] Abstract We develop a novel approach to the semantic analysis of short text segments and demonstrate its utility on a large corpus of Web search queries. Extracting meaning from short text segments is difficult as there is little semantic redundancy between terms; hence methods based on shallow semantic analysis may fail to accurately estimate meaning. Furthermore search queries lack explicit syntax often used to determine intent in question answering. In this paper we propose a hybrid model of semantic analysis combining explicit class-label extraction with a latent class PCFG. This class-label correlation (CLC) model admits a robust parallel approximation, allowing it to scale to large amounts of query data. We demonstrate its performance in terms of (1) its predicted label accuracy on polysemous queries and (2) its ability to accurately chunk queries into base constituents. 1 Introduction Search queries are generally short and rarely contain much explicit syntax, making query understanding a purely semantic endeavor. Furthermore, as in nounphrase understanding, shallow lexical semantics is often irrelevant or misleading; e.g., the query [tropical breeze cleaners] has little to do with island vacations, nor are desert birds relevant to [1970 road runner], which refers to a car model. This paper introduces class-label correlation (CLC), a novel unsupervised approach to extract∗Contributions made during an internship at Google. ing shallow semantic content that combines classbased semantic markup (e.g., road runner is a car model) with a latent variable model for capturing weakly compositional interactions between query constituents. Constituents are tagged with IsA class labels from a large, automatically extracted lexicon, using a probabilistic context free grammar (PCFG). Correlations between the resulting label→term distributions are captured using a set of latent production rules specified by a hierarchical Dirichlet Process (Teh et al., 2006) with latent data groupings. Concretely, the IsA tags capture the inventory of potential meanings (e.g., jaguar can be labeled as european car or large cat) and relevant constituent spans, while the latent variable model performs sense and theme disambiguation (e.g., [jaguar habitat] would lend evidence for the large cat label). In addition to broad sense disambiguation, CLC can distinguish closely related usages, e.g., the use of dell in [dell motherboard replacement] and [dell stock price].1 Furthermore, by employing IsA class labeling as a preliminary step, CLC can account for common non-compositional phrases, such as big apple unlike systems relying purely on lexical semantics. Additional examples can be found later, in Figure 5. In addition to improving query understanding, potential applications of CLC include: (1) relation extraction (Baeza-Yates and Tiberi, 2007), (2) query substitutions or broad matching (Jones et al., 2006), and (3) classifying other short textual fragments such as SMS messages or tweets. 
We implement a parallel inference procedure for 1Dell the computer system vs. Dell the technology company. 1200 CLC and evaluate it on a sample of 500M search queries along two dimensions: (1) query constituent chunking precision (i.e., how accurate are the inferred spans breaks; cf., Bergsma and Wang (2007); Tan and Peng (2008)), and (2) class label assignment precision (i.e., given the query intent, how relevant are the inferred class labels), paying particular attention to cases where queries contain ambiguous constituents. CLC compares favorably to several simpler submodels, with gains in performance stemming from coarse-graining related class labels and increasing the number of clusters used to capture between-label correlations. (Paper organization): Section 2 discusses relevant background, Section 3 introduces the CLC model, Section 4 describes the experimental setup employed, Section 5 details results, Section 6 introduces areas for future work and Section 7 concludes. 2 Background Query understanding has been studied extensively in previous literature. Li (2010) defines the semantic structure of noun-phrase queries as intent heads (attributes) coupled with some number of intent modifiers (attribute values), e.g., the query [alice in wonderland 2010 cast] is comprised of an intent head cast and two intent modifiers alice in wonderland and 2010. In this work we focus on semantic class markup of query constituents, but our approach could be easily extended to account for query structure as well. Popescu et al. (2010) describe a similar classlabel-based approach for query interpretation, explicitly modeling the importance of each label for a given entity. However, details of their implementation were not publicly available, as of publication of this paper. For simplicity, we extract class labels using the seed-based approach proposed by Van Durme and Pas¸ca (2008) (in particular Pas¸ca (2010)) which generalizes Hearst (1992). Talukdar and Pereira (2010) use graph-based semi-supervised learning to acquire class-instance labels; Wang et al. (2009) introduce a similar CRF-based approach but only apply it to a small number of verticals (i.e., Computing and Electronics or Clothing and Shoes). Snow et al. (2006) describe a learning approach for automatically acquiring patterns indicative of hypernym (IsA) relations. Semantic class label lexicons derived from any of these approaches can be used as input to CLC. Several authors have studied query clustering in the context of information retrieval (e.g., Beeferman and Berger, 2000). Our approach is novel in this regard, as we cluster queries in order to capture correlations between span labels, rather than explicitly for query understanding. Tratz and Hovy (2010) propose a taxonomy for classifying and interpreting noun-compounds, focusing specifically on the relationships holding between constituents. Our approach yields similar topical decompositions of noun-phrases in queries and is completely unsupervised. Jones et al. (2006) propose an automatic method for query substitution, i.e., replacing a given query with another query with the similar meaning, overcoming issues with poor paraphrase coverage in tail queries. Correlations mined by our approach are readily useful for downstream query substitution. Bergsma and Wang (2007) develop a supervised approach to query chunking using 500 handsegmented queries from the AOL corpus. 
Tan and Peng (2008) develop a generative model of query segmentation that makes use of a language model and concepts derived from Wikipedia article titles. CLC differs fundamentally in that it learns concept label markup in addition to segmentation and uses in-domain concepts derived from queries themselves. This work also differs from both of these studies significantly in scope, training on 500M queries instead of just 500. At the level of class-label markup, our model is related to Bayesian PCFGs (Liang et al., 2007; Johnson et al., 2007b), and is a particular realization of an Adaptor Grammar (Johnson et al., 2007a; Johnson, 2010). Szpektor et al. (2008) introduce a model of contextual preferences, generalizing the notion of selectional preference (cf. Ritter et al., 2010) to arbitrary terms, allowing for context-sensitive inference. Our approach differs in its use of class-instance labels for generalizing terms, a necessary step for dealing with the lack of syntactic information in queries. 1201 ΦC ΦL ΦL vinyl windows brighton seaside towns building materials query clusters label clusters label pcfg query constituents Figure 1: Overview of CLC markup generation for the query [brighton vinyl windows]. Arrows denote multinomial distributions. 3 Latent Class-Label Correlation Input to CLC consists of raw search queries and a partial grammar mapping class labels to query spans (e.g., building materials→vinyl windows). CLC infers two additional latent productions types on top of these class labels: (1) a potentially infinite set of label clusters φL lk coarse-graining the raw input label productions V , and (2) a finite set of query clusters φC ci specifying distributions over label clusters; see Figure 1 for an overview. Operationally, CLC is implemented as a Hierarchical Dirichlet Process (HDP; Teh et al., 2006) with latent groups coupled with a Probabilistic Context Free Grammar (PCFG) likelihood function (Figure 2). We motivate our use of an HDP latent class model instead of a full PCFG with binary productions by the fact that the space of possible binary rule combinations is prohibitively large (561K base labels; 314B binary rules). The next sections discuss the three main components of CLC: §3.1 the raw IsA class labels, §3.2 the PCFG likelihood, and §3.3 the HDP with latent groupings. 3.1 IsA Label Extraction IsA class labels (hypernyms) V are extracted from a large corpus of raw Web text using the method proposed by Van Durme and Pas¸ca (2008) and extended by Pas¸ca (2010). Manually specified patterns are used to extract a seed set of class labels and the resulting label lists are reranked using cluster purity measures. 561K labels for base noun phrases are collected. Table 1 shows an example set of class labels extracted for several common noun phrases. Similar repositories of IsA labels, extracted using other methods, are available for experimental purclass label→query span recreational facilities→jacuzzi rural areas→wales destinations→wales seaside towns→brighton building materials→vinyl windows consumer goods→european clothing Table 1: Example production rules collected using the semi-supervised approach of Van Durme and Pas¸ca (2008). poses (Talukdar and Pereira, 2010). In addition to extracted rules, the CLC grammar is augmented with a set of null rules, one per unigram, ensuring that every query has a valid parse. 3.2 Class-Label PCFG In addition to the observed class-label production rules, CLC incorporates two sets of latent production rules coupled via an HDP (Figure 1). 
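Before turning to the latent layers, a minimal sketch may help fix ideas about the observed part of the grammar: the class-label→query-span productions of Section 3.1 can be thought of as a lookup from spans to candidate labels, with one null rule per unigram guaranteeing that every query has a valid parse. The representation and helper names below are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

# A few observed class-label -> query-span productions (cf. Table 1).
RAW_PRODUCTIONS = [
    ("seaside towns", "brighton"),
    ("destinations", "wales"),
    ("rural areas", "wales"),
    ("building materials", "vinyl windows"),
    ("recreational facilities", "jacuzzi"),
]

def build_lexicon(productions):
    """Map each query span to the set of class labels that can produce it."""
    lexicon = defaultdict(set)
    for label, span in productions:
        lexicon[span].add(label)
    return lexicon

def candidate_labels(span, lexicon):
    """Candidate labels for a span; unigrams fall back to a null rule so
    that every query has at least one valid parse."""
    if span in lexicon:
        return lexicon[span]
    if len(span.split()) == 1:
        return {"<null>"}            # one null rule per unigram
    return set()

lexicon = build_lexicon(RAW_PRODUCTIONS)
print(candidate_labels("vinyl windows", lexicon))   # {'building materials'}
print(candidate_labels("wales", lexicon))           # {'destinations', 'rural areas'}
print(candidate_labels("cleaners", lexicon))        # {'<null>'}
```

In the full system such a lexicon would of course be built from the 561K extracted labels rather than from a hand-written list.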
Class label→query span productions extracted from raw text are clustered into a set of latent label production clusters L = {l1, . . . , l∞}. Each label production cluster lk defines a multinomial distribution over class labels V parametrized by φL lk. Conceptually, φL lk captures a set of class labels with similar productions that are found in similar queries, for example the class labels states, northeast states, u.s. states, state areas, eastern states, and certain states might be included in the same coarse-grained cluster due to similarities in their productions. Each query q ∈Q is assigned to a latent query cluster cq ∈C{c1, . . . , c∞}, which defines a distribution over label production clusters L, denoted φC cq. Query clusters capture broad correlations between label production clusters and are necessary for performing sense disambiguation and capturing selectional preference. Query clusters and label production clusters are linked using a single HDP, allowing the number of label clusters to vary over the course of Gibbs sampling, based on the variance of the underlying data (Section 3.3). Viewed as a grammar, CLC only contains unary rules mapping labels to query spans; production correlations are captured directly by the query cluster, unlike in HDP-PCFG (Liang et al., 2007), as branching parses over the en1202 Indices Cardinality HDP base measure β ∼GEM(γ) |L| →∞ Query cluster φC i ∼DP(αC, β) i ∈|C| |L| →∞ Label cluster φL k ∼Dirichlet(αL) k ∈|L| |V | Query cluster ind πq ∼Dirichlet(ξ) q ∈|Q| |C| cq ∼πq q ∈|Q| 1 Label cluster ind zq,t ∼φC cq t ∈q, q ∈|Q| 1 Label ind lq,t ∼φL zq,t t ∈q, q ∈|Q| 1 c z π q t l !L ∞ β ξ α label clusters !C |C| α0 query clusters γ Figure 2: Generative process and graphical model for CLC. The top section of the model is the standard HDP prior; the middle section is the additional machinery necessary for modeling latent groupings and the bottom section contains the indicators for the latent class model. PCFG likelihood is not shown. tire label sparse are intractably large. Given a query q, a query cluster assignment cq and a set of label production clusters L, we define a parse of q to be a sequence of productions tq forming a parse tree consuming all the tokens in q. As with Bayesian PCFGs (Johnson, 2010), the probability of a tree tq is the product of the probabilities of the production rules used to construct it P(tq|φL, φC, cq) = Y r∈Rq P(r|φL lr)P(lr|φC cq) where Rq is the set of production rules used to derive tq, P(r|φL lr) is the probability of r given its label cluster assignment lr, and P(lr|φC cq) is the probability of label cluster lr in query cluster c. The probability of a query q is the sum of the probabilities of the parse trees that can generate it, P(q|φL, φC, cq) = X {t|y(t)=q} P(t|φL, φC, cq) where {t|y(t) = q} is the set of trees with q as their yield (i.e., generate the string of tokens in q). 3.3 Hierarchical Dirichlet Process with Latent Groups We complete the Bayesian generative specification of CLC with an HDP prior linking φC and φL. The HDP is a Bayesian generative model of shared structure for grouped data (Teh et al., 2006). A set of base clusters β ∼GEM(γ) is drawn from a Dirichlet Process with base measure γ using the stickbreaking construction, and clusters for each group k, γ – HDP-LG base-measure smoother; higher values lead to more uniform mass over label clusters. αC – Query cluster smoothing; higher values lead to more uniform mass over label clusters. 
αL – Label cluster smoothing; higher values lead to more label diversity within clusters. ξ – Query cluster assignment smoothing; higher values lead to more uniform assignment. Table 2: CLC-HDP-LG hyperparameters. φC k ∼DP(β), are drawn from a separate Dirichlet Process with base measure β, defined over the space of label clusters. Data in each group k are conditionally independent given β. Intuitively, β defines a common “menu” of label clusters, and each query cluster φC k defines a separate distribution over the label clusters. In order to account for variable query-cluster assignment, we extend the HDP model with latent groupings πq ∼Dir(ξ) for each query. The resulting Hierarchical Dirichlet Process with Latent Groups (HDP-LG) can be used to define a set of query clusters over a set of (potentially infinite) base label clusters (Figure 2). Each query cluster φC (latent group) assigns weight to different subsets of the available label clusters φL, capturing correlations between them at the query level. Each query q maintains a distribution over query clusters πq, capturing its affinity for each latent group. The full generative specification of CLC is shown in Figure 2; hyperparameters are shown in Table 2. In addition to the full joint CLC model, we evalu1203 ate several simpler models: 1. CLC-BASE – no query clusters, one label per label cluster. 2. CLC-DPMM – no query clusters, DPMM(αC) distribution over labels. 3. CLC-HDP-LG – full HDP-LG model with |C| query clusters over a potentially infinite number of query clusters. as well as various hyperparameter settings. 3.4 Parallel Approximate Gibbs Sampler We perform inference in CLC via Gibbs sampling, leveraging Multinomial-Dirichlet conjugacy to integrate out π, φC and φL (Teh et al., 2006; Johnson et al., 2007b). The remaining indicator variables c, z and l are sampled iteratively, conditional on all other variable assignments. Although there are an exponential number of parse trees for a given query, this space can be sampled efficiently using dynamic programming (Finkel et al., 2006; Johnson et al., 2007b) In order to apply CLC to Web-scale data, we implement an efficient parallel approximate Gibbs sampler in the MapReduce framework Dean and Ghemawat (2004). Each Gibbs iteration consists of a single MapReduce step for sampling, followed by an additional MapReduce step for computing marginal counts. 2 Relevant assignments c, z and l are stored locally with each query and are distributed across compute nodes. Each node is responsible only for resampling assignments for its local set of queries. Marginals are fetched opportunistically from a separate distributed hash server as they are needed by the sampler. Each Map step computes a single Gibbs step for 10% of the available data, using the marginals computed at the previous step. By resampling only 10% of the available data each iteration, we minimize the potentially negative effects of using the previous step’s marginal distribution. 4 Experimental Setup 4.1 Query Corpus Our dataset consists of a sample of 450M English queries submitted by anonymous Web users to 2This approximation and architecture is similar to Smola and Narayanamurthy (2010). Query length density 0.1 0.2 0.3 0.4 2 4 6 8 10 12 Figure 3: Distribution in the query corpus, broken down by query length (red/solid=all queries; blue/dashed=queries with ambiguous spans); most queries contain between 2-6 tokens. Google. The queries have an average of 3.81 tokens per query (1.7B tokens). 
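Returning briefly to the scoring defined in Section 3.2, the sketch below computes the probability of a single candidate parse as the product, over its unary rules, of the rule probability under its label cluster and that label cluster's probability under the query's cluster; summing such scores over all parses whose yield is the query gives the query probability. The dictionary-based parameters and function names are assumptions made for illustration; in the actual sampler these multinomials are integrated out rather than represented explicitly.

```python
import math

def log_parse_prob(rules, query_cluster, phi_L, phi_C):
    """Log-probability of one parse tree of a query.

    rules:         list of (label_cluster, class_label, span) unary rules used
    query_cluster: the query cluster c_q this query is assigned to
    phi_L:         phi_L[label_cluster][class_label] = P(rule | label cluster)
    phi_C:         phi_C[query_cluster][label_cluster] = P(label cluster | c_q)
    """
    logp = 0.0
    for label_cluster, class_label, _span in rules:
        logp += math.log(phi_C[query_cluster][label_cluster])
        logp += math.log(phi_L[label_cluster][class_label])
    return logp

# Toy parameters for the query [brighton vinyl windows]; the values are made up.
phi_C = {"c_home_improvement": {"l_places": 0.3, "l_materials": 0.7}}
phi_L = {"l_places": {"seaside towns": 0.2, "destinations": 0.8},
         "l_materials": {"building materials": 1.0}}

parse = [("l_places", "seaside towns", "brighton"),
         ("l_materials", "building materials", "vinyl windows")]
print(log_parse_prob(parse, "c_home_improvement", phi_C, phi_L))
```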
Single token queries are removed as the model is incapable of using context to disambiguate their meaning. Figure 3 shows the distribution of remaining queries. During training, we include 10 copies of each query (4.5B queries total), allowing an estimate of the Bayes average posterior from a single Gibbs sample. 4.2 Evaluations Query markup is evaluated for phrase-chunking precision (Section 5.1) and label precision (Section 5.2) by human raters across two different samples: (1) an unbiased sample from the original corpus, and (2) a biased sample of queries containing ambiguous spans. Two raters scored a total of 10K labels from 800 spans across 300 queries. Span labels were marked as incorrect (0.0), badspan (0.0), ambiguous (0.5), or correct (1.0), with numeric scores for label precision as indicated. Chunking precision is measured as the percentage of labels not marked as badspan. We report two sets of precision scores depending on how null labels are handled: Strict evaluation treats null-labeled spans as incorrect, while Normal evaluation removes null-labeled spans from the precision calculation. Normal evaluation was included since the simpler models (e.g., CLC-BASE) tend to produce a significantly higher number of null assignments. Model evaluations were broken down into maximum a posteriori (MAP) and Bayes average estimates. MAP estimates are calculated as the single most likely label/cluster assignment across all query copies; all assignments in the sample are averaged 1204 % cluster moves 0.0 0.2 0.4 0.6 0.8 50 100 150 200 250 % label moves 0.25 0.30 0.35 0.40 0.45 0.50 50 100 150 200 250 Gibbs iterations % null rules 0.040 0.045 0.050 0.055 0.060 0.065 0.070 50 100 150 200 250 Figure 4: Convergence rates of CLCBASE (red/solid), CLC-HDP-LG 100C,40L (green/dashed), CLC-HDP-LG 1000C,40L (blue/dotted) in terms of % of query cluster swaps, label cluster swaps and null rule assignments. to obtain the Bayes average precision estimate.3 5 Results A total of five variants of CLC were evaluated with different combinations of |C| and HDP prior concentration αC (controlling the effective number of label clusters). Referring to models in terms of their parametrizations is potentially confusing. Therefore, we will make use of the fact that models with αC = 1 yielded roughly 40 label clusters on average, and models with αC = 0.1 yielded roughly 200 label clusters, naming model variants simply by the number of query and label clusters: (1) CLC-BASE, (2) CLC-DPMM 1C-40L, (3) CLC-HDP-LG 100C40L, (4) CLC-HDP-LG 1000C-40L, and (5) CLCHDP-LG 1000C-200L. Figure 4 shows the model convergence for CLC-BASE, CLC-HDP-LG 100C40L, and CLC-HDP-LG 1000C-40L. 3We calculate the Bayes average precision estimates at the top 10 (Bayes@10) and top 20 (Bayes@20) parse trees, weighted by probability. 5.1 Chunking Precision Chunking precision scores for each model are shown in Table 3 (average % of labels not marked badspan). CLC-HDP-LG 1000C-40L has the highest precision across both MAP and Bayes estimates (∼93% accuracy), followed by CLC-HDP-LG 1000C-200L (∼90% accuracy) and CLC-DPMM 1C40L (∼85%). CLC-BASE performed the worst by a significant margin (∼78%), indicating that label coarse-graining is more important than query clustering for chunking accuracy. No significant differences in label chunking accuracy were found between Bayes and MAP inference. 
5.2 Predicting Span Labels The full CLC-HDP-LG model variants obtain higher label precision than the simpler models, with CLCHDP-LG 1000C-40L achieving the highest precision of the three (∼63% accuracy). Increasing the number of label clusters too high, however, significantly reduces precision: CLC-HDP-LG 1000C-200L obtains only ∼51% accuracy. However, comparing to CLC-DPMM 1C-40L and CLC-BASE demonstrates that the addition of label clusters and query clusters both lead to gains in label precision. These relative rankings are robust across strict and normal evaluation regimes. The breakdown over MAP and Bayes posterior estimation is less clear when considering label precision: the simpler models CLC-BASE and CLCDPMM 1C-40L perform significantly worse than Bayes when using MAP estimation, while in CLCHDP-LG the reverse holds. There is little evidence for correlation between precision and query length (weak, not statistically significant negative correlation using Spearman’s ρ). This result is interesting as the relative prevalence of natural language queries increases with query length, potentially degrading performance. However, we did find a strong positive correlation between precision and the number of labels productions applicable to a query, i.e., production rule fertility is a potential indicator of semantic quality. Finally, the histogram column in Table 3 shows the distribution of rater responses for each model. In general, the more precise models tend to have a significantly lower proportion of missing spans 1205 Model Chunking Label Precision Ambiguous Label Precision Spearman’s ρ Precision normal strict hist normal strict q. len # labels Class-Label Correlation Base Bayes@10 78.7±1.1 37.7±1.2 35.8±1.2 35.4±2.0 33.2±1.9 -0.13 0.51• Bayes@20 78.7±1.1 37.7±1.2 35.8±1.2 35.4±2.0 33.2±1.9 -0.13 0.51• MAP 76.3±2.2 33.3±2.2 31.8±2.2 36.2±4.0 33.2±3.8 -0.13 0.52• Class-Label Correlation DPMM 1C 40L Bayes@10 84.9±0.4 46.6±0.6 44.3±0.5 36.0±1.1 33.7±1.0 -0.05 0.25 Bayes@20 84.8±0.4 47.4±0.5 45.2±0.5 37.8±1.0 35.5±1.0 -0.02 0.23 MAP 84.1±0.8 42.6±1.0 40.5±0.9 11.2±1.3 10.6±1.3 -0.03 0.12 Class-Label Correlation HDP-LG 100C 40L Bayes@10 83.8±0.4 55.6±0.5 51.0±0.5 55.6±1.0 47.7±1.0 0.03 0.44• Bayes@20 83.6±0.4 56.9±0.5 52.3±0.5 57.4±1.0 49.8±0.9 0.04 0.41• MAP 82.7±0.5 58.5±0.5 53.6±0.5 60.4±1.1 51.5±1.0 0.02 0.41• Class-Label Correlation HDP-LG 1000C 40L Bayes@10 93.1±0.2 61.1±0.3 60.0±0.3 43.2±0.9 40.2±0.9 -0.06 0.26• Bayes@20 92.8±0.2 62.6±0.3 61.7±0.3 44.9±0.8 42.2±0.8 -0.10 0.27• MAP 92.7±0.2 63.7±0.3 62.7±0.3 44.1±0.9 41.1±0.9 -0.12 0.28• Class-Label Correlation HDP-LG 1000C 200L Bayes@10 90.3±0.5 50.9±0.8 48.6±0.7 45.8±1.5 42.5±1.3 -0.10 0.13 Bayes@20 89.9±0.5 50.2±0.7 48.0±0.7 44.4±1.4 41.3±1.3 -0.08 0.11 MAP 90.0±0.6 51.0±0.8 48.9±0.8 49.2±1.5 46.0±1.4 -0.07 0.04 Table 3: Chunking and label precision across five models. Confidence intervals are standard error; sparklines show distribution of precision scores (left is zero, right is one). Hist shows the distribution of human rating response (log y scale): green/first is correct, blue/second is ambiguous, cyan/third is missing and red/fourth is incorrect. Spearman’s ρ columns give label precision correlations with query length (weak negative correlation) and the number of applicable labels (weak to strong positive correlation); dots indicate significance. (blue/second bar; due to null rule assignment) in additional to more correct (green/first) and fewer incorrect (red/fourth) spans. 
5.3 High Polysemy Subset We repeat the analysis of label precision on a subset of queries containing one of the manually-selected polysemous spans shown in Table 4. The CLCHDP-LG -based models still significantly outperform the simpler models, but unlike in the broader setting, CLC-HDP-LG 100C-40L significantly outperforms CLC-HDP-LG 1000C-40L, indicating that lower query cluster granularity helps address polysemy (Table 3). 5.4 Error Analysis Figure 5 gives examples of both high-precision and low-precision queries markups inferred by CLCHDP-LG. In general, CLC performs well on queries with clear intent head / intent modifier structure (Li, acapella, alamo, apple, atlas, bad, bank, batman, beloved, black forest, bravo, bush, canton, casino, champion, club, comet, concord, dallas, diamond, driver, english, ford, gamma, ion, lemon, manhattan, navy, pa, palm, port, put, resident evil, ronaldo, sacred heart, saturn, seven, solution, sopranos, sparta, supra, texas, village, wolf, young Table 4: Samples from a list of 90 manually selected ambiguous spans used to evaluate model performance under polysemy. 2010). More complex queries, such as [never know until you try quotes] or [how old do you have to be a bartender in new york] do not fit this model; however, expanding the set of extracted labels to also cover instances such as never know until you try would mitigate this problem, motivating the use of n-gram language models with semantic markup. A large number of mistakes made by CLC are 1206 Top 10% Bottom 20% Middle 20% Figure 5: Examples of high- and low-precision query markups inferred by CLC-HDP-LG. Black text is the original query; lines indicate potential spans; small text shows potential labels colored and numbered by label cluster; small bar shows percentage of assignments to that label cluster. due to named-entity categories with weak semantics such as rock bands or businesses (e.g., [tropical breeze cleaners], [cosmic railroad band] or [sopranos cigars]). When the named entity is common enough, it is detected by the rule set, but for the long tail of named entities this is not the case. One potential solution is to use a stronger notion of selectional preference and slot-filling, rather than just relying on correlation between labels. Other examples of common errors include interpreting weymouth in [weymouth train time table] as a town in Massachusetts instead of a town in the UK (lack of domain knowledge), and using lower quality semantic labels (e.g., neighboring countries for france, or great retailers for target). 6 Discussion and Future Work Adding both latent label clusters (DPMM) and latent query clusters (extending to HDP-LG) improve chunking and label precision over the baseline CLCBASE system. The label clusters are important because they capture intra-group correlations between class labels, while the query clusters are important for capturing inter-group correlations. However, the algorithm is sensitive to the relative number of clusters in each case: Too many labels/label clusters rel1207 ative to the number of query clusters make it difficult to learn correlations (O(n2) query clusters are required to capture pairwise interactions). Too many query clusters, on the other hand, make the model intractable computationally. The HDP automates selecting the number of clusters, but still requires manual hyperparameter setting. (Future Work) Many query slots have weak semantics and hence are misleading for CLC. 
For example [pacific breeze cleaners] or [dale hartley subaru] should be parsed such that the type of the leading slot is determined not by its direct content, but by its context; seeing subaru or cleaners after a noun-phrase slot is a strong indicator of its type (dealership or shop name). The current CLC model only couples these slots through their correlations in query clusters, not directly through relative position or context. Binary productions in the PCFG or a discriminative learning model would help address this. Finally, we did not measure label coverage with respect to a human evaluation set; coverage is useful as it indicates whether our inferred semantics are biased with respect to human norms. 7 Conclusions We introduced CLC, a set of latent variable PCFG models for semantic analysis of short textual segments. CLC captures semantic information in the form of interactions between clusters of automatically extracted class-labels, e.g., finding that placenames commonly co-occur with business-names. We applied CLC to a corpus containing 500M search queries, demonstrating its scalability and straightforward parallel implementation using frameworks like MapReduce or Hadoop. CLC was able to chunk queries into spans more accurately and infer more precise labels than several sub-models even across a highly ambiguous query subset. The key to obtaining these results was coarse-graining the input classlabel set and using a latent variable model to capture interactions between coarse-grained labels. References R. Baeza-Yates and A. Tiberi. 2007. Extracting semantic relations from query logs. In Proceedings of the 13th ACM Conference on Knowledge Discovery and Data Mining (KDD-07), pages 76–85. San Jose, California. D. Beeferman and A. Berger. 2000. Agglomerative clustering of a search engine query log. In Proceedings of the 6th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD-00), pages 407–416. S. Bergsma and Q. Wang. 2007. Learning noun phrase query segmentation. In Proceedings of the 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP-07), pages 819–826. Prague, Czech Republic. J. Dean and S. Ghemawat. 2004. MapReduce: Simplified data processing on large clusters. In Proceedings of the 6th Symposium on Operating Systems Design and Implementation (OSDI-04), pages 137–150. San Francisco, California. J. Finkel, C. Manning, and A. Ng. 2006. Solving the problem of cascading errors: Approximate Bayesian inference for linguistic annotation pipelines. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP-06), pages 618–626. Sydney, Australia. M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), pages 539–545. Nantes, France. M. Johnson. 2010. PCFGs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10), pages 1148–1157. Uppsala, Sweden. M. Johnson, T. Griffiths, and S. Goldwater. 2007a. Adaptor grammars: a framework for specifying compositional nonparametric bayesian models. In Advances in Neural Information Processing Systems 19, pages 641–648. Vancouver, Canada. M. Johnson, T. Griffiths, and S. Goldwater. 2007b. Bayesian inference for PCFGs via Markov Chain Monte Carlo. 
In Proceedings of the 2007 Conference of the North American Association for Computational Linguistics (NAACL-HLT-07), pages 139–146. Rochester, New York. R. Jones, B. Rey, O. Madani, and W. Greiner. 2006. Generating query substitutions. In Proceedings of the 15h World Wide Web Conference (WWW-06), pages 387– 396. Edinburgh, Scotland. X. Li. 2010. Understanding the semantic structure of noun phrase queries. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10), pages 1337–1345. Uppsala, Sweden. 1208 P. Liang, S. Petrov, M. Jordan, and D. Klein. 2007. The infinite PCFG using hierarchical Dirichlet processes. In Proceedings of the 2007 Conference on Empirical Methods in Natural Language Processing (EMNLP07), pages 688–697. Prague, Czech Republic. M. Pas¸ca. 2010. The role of queries in ranking labeled instances extracted from text. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING-10), pages 955–962. Beijing, China. A. Popescu, P. Pantel, and G. Mishne. 2010. Semantic lexicon adaptation for use in query interpretation. In Proceedings of the 19th World Wide Web Conference (WWW-10), pages 1167–1168. Raleigh, North Carolina. A. Ritter, Mausam, and O. Etzioni. 2010. A latent Dirichlet allocation method for selectional preferences. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10), pages 424–434. Uppsala, Sweden. A. Smola and S. Narayanamurthy. 2010. An architecture for parallel topic models. In Proceedings of the 36th Conference on Very Large Data Bases (VLDB10), pages 703–710. singapore. R. Snow, D. Jurafsky, and A. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLINGACL-06), pages 801–808. Sydney, Australia. I. Szpektor, I. Dagan, R. Bar-Haim, and J. Goldberger. 2008. Contextual preferences. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL-08), pages 683–691. Columbus, Ohio. P. Talukdar and F. Pereira. 2010. Experiments in graphbased semi-supervised learning methods for classinstance acquisition. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10), pages 1473–1481. Uppsala, Sweden. B. Tan and F. Peng. 2008. Unsupervised query segmentation using generative language models and Wikipedia. In Proceedings of the 17th World Wide Web Conference (WWW-08), pages 347–356. Beijing, China. Y. Teh, M. Jordan, M. Beal, and D. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. S. Tratz and E. Hovy. 2010. A taxonomy, dataset, and classifier for automatic noun compound interpretation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10), pages 678–687. Uppsala, Sweden. B. Van Durme and M. Pas¸ca. 2008. Finding cars, goddesses and enzymes: Parametrizable acquisition of labeled instances for open-domain information extraction. In Proceedings of the 23rd National Conference on Artificial Intelligence (AAAI-08), pages 1243– 1248. Chicago, Illinois. T. Wang, R. Hoffmann, X. Li, and J. Szymanski. 2009. Semi-supervised learning of semantic classes for query understanding: from the Web and for the Web. 
In Proceedings of the 18th International Conference on Information and Knowledge Management (CIKM-09), pages 37–46. Hong Kong, China. 1209
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1210–1219, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Creating a manually error-tagged and shallow-parsed learner corpus Ryo Nagata Konan University 8-9-1 Okamoto, Kobe 658-0072 Japan rnagata @ konan-u.ac.jp. Edward Whittaker Vera Sheinman The Japan Institute for Educational Measurement Inc. 3-2-4 Kita-Aoyama, Tokyo, 107-0061 Japan whittaker,sheinman  @jiem.co.jp Abstract The availability of learner corpora, especially those which have been manually error-tagged or shallow-parsed, is still limited. This means that researchers do not have a common development and test set for natural language processing of learner English such as for grammatical error detection. Given this background, we created a novel learner corpus that was manually error-tagged and shallowparsed. This corpus is available for research and educational purposes on the web. In this paper, we describe it in detail together with its data-collection method and annotation schemes. Another contribution of this paper is that we take the first step toward evaluating the performance of existing POStagging/chunking techniques on learner corpora using the created corpus. These contributions will facilitate further research in related areas such as grammatical error detection and automated essay scoring. 1 Introduction The availability of learner corpora is still somewhat limited despite the obvious usefulness of such data in conducting research on natural language processing of learner English in recent years. In particular, learner corpora tagged with grammatical errors are rare because of the difficulties inherent in learner corpus creation as will be described in Sect. 2. As shown in Table 1, error-tagged learner corpora are very few among existing learner corpora (see Leacock et al. (2010) for a more detailed discussion of learner corpora). Even if data is error-tagged, it is often not available to the public or its access is severely restricted. For example, the Cambridge Learner Corpus, which is one of the largest errortagged learner corpora, can only be used by authors and writers working for Cambridge University Press and by members of staff at Cambridge ESOL. Error-tagged learner corpora are crucial for developing and evaluating error detection/correction algorithms such as those described in (Rozovskaya and Roth, 2010b; Chodorow and Leacock, 2000; Chodorow et al., 2007; Felice and Pulman, 2008; Han et al., 2004; Han et al., 2006; Izumi et al., 2003b; Lee and Seneff, 2008; Nagata et al., 2004; Nagata et al., 2005; Nagata et al., 2006; Tetreault et al., 2010b). This is one of the most active research areas in natural language processing of learner English. Because of the restrictions on their availability, researchers have used their own learner corpora to develop and evaluate error detection/correction methods, which are often not commonly available to other researchers. This means that the detection/correction performance of each existing method is not directly comparable as Rozovskaya and Roth (2010a) and Tetreault et al. (2010a) point out. In other words, we are not sure which methods achieve the best performance. Commonly available errortagged learner corpora are therefore essential to further research in this area. 
For similar reasons, to the best of our knowledge, there exists no such learner corpus that is manually shallow-parsed and which is also publicly available, unlike, say, native-speaker corpora such as the Penn Treebank. Such a comparison brings up another crucial question: “Do existing POS taggers and chun1210 Name Error-tagged Parsed Size (words) Availability Cambridge Learner Corpus Yes No 30 million No CLEC Corpus Yes No 1 million Partially ETLC Corpus Partially No 2 million Not Known HKUST Corpus Yes No 30 million No ICLE Corpus (Granger et al., 2009) No No 3.7 million+ Yes JEFLL Corpus (Tono, 2000) No No 1 million Partially Longman Learners’ Corpus No No 10 million Not Known NICT JLE Corpus (Izumi et al., 2003a) Partially No 2 million Partially Polish Learner English Corpus No No 0.5 million No Janus Pannoius University Learner Corpus No No 0.4 million Not Known In Availability, Yes denotes that the full texts of the corpus is available to the public. Partially denotes that it is accessible through specially-made interfaces such as a concordancer. The information in this table may not be consistent because many of the URLs of the corpora give only sparse information about them. Table 1: Learner corpus list. kers work on learner English as well as on edited text such as newspaper articles?” Nobody really knows the answer to the question. The only exception in the literature is the work by Tetreault et al. (2010b) who evaluated parsing performance in relation to prepositions. Nevertheless, a great number of researchers have used existing POS taggers and chunkers to analyze the writing of learners of English. For instance, error detection methods normally use a POS tagger and/or a chunker in the error detection process. It is therefore possible that a major cause of false positives and negatives in error detection may be attributed to errors in POS-tagging and chunking. In corpus linguistics, researchers (Aarts and Granger, 1998; Granger, 1998; Tono, 2000) use such tools to extract interesting patterns from learner corpora and to reveal learners’ tendencies. However, poor performance of the tools may result in misleading conclusions. Given this background, we describe in this paper a manually error-tagged and shallow-parsed learner corpus that we created. In Sect. 2, we discuss the difficulties inherent in learner corpus creation. Considering the difficulties, in Sect. 3, we describe our method for learner corpus creation, including its data collection method and annotation schemes. In Sect. 4, we describe our learner corpus in detail. The learner corpus is called the Konan-JIEM learner corpus (KJ corpus) and is freely available for research and educational purposes on the web1. Another contribution of this paper is that we take the first step toward answering the question about the performance of existing POS-tagging/chunking techniques on learner data. We report and discuss the results in Sect. 5. 2 Difficulties in Learner Corpus Creation In addition to the common difficulties in creating any corpus, learner corpus creation has its own difficulties. We classify them into the following four categories of the difficulty in: 1. collecting texts written by learners; 2. transforming collected texts into a corpus; 3. copyright transfer; and 4. error and POS/parsing annotation. The first difficulty concerns the problem in collecting texts written by learners. 
As in the case of other corpora, it is preferable that the size of a learner corpus be as large as possible where the size can be measured in several ways including the total number of texts, words, sentences, writers, topics, and texts per writer. However, it is much more difficult to create a large learner corpus than to create a 1http://www.gsk.or.jp/index_e.html 1211 large native-speaker corpus. In the case of nativespeaker corpora, published texts such as newspaper articles or novels can be used as a corpus. By contrast, in the case of learner corpora, we must find learners and then let them write since there are no such published texts written by learners of English (unless they are part of a learner corpus). Here, it should be emphasized that learners often do not spontaneously write but are typically obliged to write, for example, in class, or during an exam. Because of this, learners may soon become tired of writing. This in itself can affect learner corpus creation much more than one would expect especially when creating a longitudinal learner corpus. Thus, it is crucial to keep learners motivated and focused on the writing assignments. The second difficulty arises when the collected texts are transformed into a learner corpus. This involves several time-consuming and troublesome tasks. The texts must be archived in electronic form, which requires typing every single collected text since learners normally write on paper. Besides, each text must be archived and maintained with accompanying information such as who wrote what text when and on what topic. Optionally, a learner corpus could include other pieces of information such as proficiency, first language, and age. Once the texts have been electronically archived, it is relatively easy to maintain and access them. However, this is not the case when the texts are first collected. Thus, it is better to have an efficient method for managing such information as well as the texts themselves. The third difficulty concerning copyright is a daunting problem. The copyright for each text must be transferred to the corpus creator so that the learner corpus can be made available to the public. Consider the case when a number of learners participate in a learner corpus creation project and everyone has to sign a copyright transfer form. This issue becomes even more complicated when the writer does not actually have such a right to transfer copyright. For instance, under the Japanese law, those younger than 20 years of age do not have the right; instead their parents do. Thus, corpus creators have to ask learners’ parents to sign copyright transfer forms. This is often the case since the writers in learner corpus creation projects are normally junior high school, high school, or college students. The final difficulty is in error and POS/parsing annotation. For error annotation, several annotation schemes exist (for example, the NICT JLE scheme (Izumi et al., 2005)). While designing an annotation scheme is one issue, annotating errors is yet another. No matter how well an annotation scheme is designed, there will always be exceptions. Every time an exception appears, it becomes necessary to revise the annotation scheme. Another issue we have to remember is that there is a trade-off between the granularity of an annotation scheme and the level of the difficulty in error annotation. The more detailed an annotation scheme is, the more information it can contain and the more difficult identifying errors is, and vice versa. 
For POS/parsing annotation, there are also a number of annotation schemes including the Brown tag set, the Claws tag set, and the Penn Treebank tag set. However, none of them are designed to be used for learner corpora. In other words, a variety of linguistic phenomena occur in learner corpora which the existing annotation schemes do not cover. For instance, spelling errors often appear in texts written by learners of English as in sard year, which should be third year. Grammatical errors prevent us applying existing annotation schemes, too. For instance, there are at least three possibilities for POStagging the word sing in the sentence everyone sing together. using the Penn Treebank tag set: sing/VB, sing/VBP, or sing/VBZ. The following example is more complicated: I don’t success cooking. Normally, the word success is not used as a verb but as a noun. The instance, however, appears in a position where a verb appears. As a result, there are at least two possibilities for tagging: success/NN and success/VB. Errors in mechanics are also problematic as in Tonight,we and beautifulhouse (missing spaces)2. One solution is to split them to obtain the correct strings and then tag them with a normal scheme. However, this would remove the information that spaces were originally missing which we want to preserve. To handle these and other phenomena which are peculiar to learner corpora, we need to develop a novel annotation scheme. 2Note that the KJ corpus consists of typed essays. 1212 3 Method 3.1 How to Collect and Maintain Texts Written by Learners Our text-collection method is based on writing exercises. In the writing exercises, learners write essays on a blog system. This very simple idea of using a blog system naturally solves the problem of archiving texts in electronic form. In addition, the use of a blog system enables us to easily register and maintain accompanying information including who (user ID) writes when (uploaded time) and on what topic (title of blog item). Besides, once registered in the user profile, the optional pieces of information such as proficiency, first language, and age are also easy to maintain and access. To design the writing exercises, we consulted with several teachers of English and conducted preexperiments. Ten learners participated in the preexperiments and were assigned five essay topics on average. Based on the experimental results, we designed the procedure of the writing exercise as shown in Table 2. In the first step, learners are assigned an essay topic. In the second step, they are given time to prepare during which they think about what to write on the given topic before they start writing. We found that this enables the students to write more. In the third step, they actually write an essay on the blog system. After they have finished writing, they submit their essay to the blog system to be registered. The following steps were considered optional. We implemented an article error detection method (Nagata et al., 2006) in the blog system as a trial attempt to keep the learners motivated since learners are likely to become tired of doing the same exercise repeatedly. To reduce this, the blog system highlights where article errors exist after the essay has been submitted. The hope is that this might prompt the learners to write more accurately and to continue the exercises. In the pre-experiments, the detection did indeed seem to interest the learners and to provide them with additional motivation. 
Considering these results, we decided to include the fourth and fifth steps in the writing exercises when we created our learner corpus. At the same time, we should of course be aware that the use of error detection affects learners’ writing. For example, it may change the Step Min. 1. Learner is assigned an essay topic – 2. Learner prepares for writing 5 3. Learner writes an essay 35 4. System detects errors in the essay 5 5. Learner rewrites the essay 15 Table 2: Procedure of writing exercise. distribution of errors. Nagata and Nakatani (2010) reported the effects in detail. To solve the problem of copyright transfer, we took legal professional advice but were informed that, in Japan at least, the only way to be sure is to have a copyright transfer form signed every time. We considered having it signed on the blog system, but it soon turned out that this did not work since participating learners may still be too young to have the legal right to sign the transfer. It is left for our long-term future work to devise a better solution to this legal issue. 3.2 Annotation Scheme This subsection describes the error and POS/chunking annotation schemes. Note that errors and POS/chunking are annotated separately, meaning that there are two files for any given text. Due to space restrictions we limit ourselves to only summarizing our annotation schemes in this section. The full descriptions are available together with the annotated corpus on the web. 3.2.1 Error Annotation We based our error annotation scheme on that used in the NICT JLE corpus (Izumi et al., 2003a), whose detailed description is readily available, for example, in Izumi et al. (2005). In that annotation scheme and accordingly in ours, errors are tagged using an XML syntax; an error is annotated by tagging a word or phrase that contains it. For instance, a tense error is annotated as follows: I  v tns crr=“made”  make  /v tns  pies last year. where v tns denotes a tense error in a verb. It should be emphasized that the error tags contain the information on correction together with error annotation. For instance, crr=“made” in the above example denotes the correct form of the verb is made. For missing word errors, error tags are placed where 1213 a word or phrase is missing (e.g., My friends live  prp crr=“in”  /prp  these places.). As a pilot study, we applied the NICT JLE annotation scheme to a learner corpus to reveal what modifications we needed to make. The learner corpus consisted of 455 essays (39,716 words) written by junior high and high school students3. The following describes the major modifications deemed necessary as a result of the pilot study. The biggest difference between the NICT JLE corpus and our targeted corpus is that the former is spoken data and the latter is written data. This difference inevitably requires several modifications to the annotation scheme. In speech data, there are no errors in spelling and mechanics such as punctuation and capitalization. However, since such errors are not usually regarded as grammatical errors, we decided simply not to annotate them in our annotation schemes. Another major difference is fragment errors. Fragments that do not form a complete sentence often appear in the writing of learners (e.g., I have many books. Because I like reading.). In written language, fragments can be regarded as a grammatical error. To annotate fragment errors, we added a new tag  f  (e.g., I have many books.  f  Because I like reading.  /f  ). As discussed in Sect. 
2, there is a trade-off between the granularity of an annotation scheme and the level of the difficulty in annotating errors. In our annotation scheme, we narrowed down the number of tags to 22 from 46 in the original NICT JLE tag set to facilitate the annotation; the 22 tags are shown in Appendix A. The removed tags are merged into the tag for other. For instance, there are only three tags for errors in nouns (number, lexis, and other) in our tag set whereas there are six in the NICT JLE corpus (inflection, number, case, countability, complement, and lexis); the other tag (  n o  ) covers the four removed tags. 3.2.2 POS/Chunking Annotation We selected the Penn Treebank tag set, which is one of the most widely used tag sets, for our 3The learner corpus had been created before this reported work started. Learners wrote their essays on paper. Unfortunately, this learner corpus cannot be made available to the public since the copyrights were not transferred to us. POS/chunking annotation scheme. Similar to the error annotation scheme, we conducted a pilot study to determine what modifications we needed to make to the Penn Treebank scheme. In the pilot study, we used the same learner corpus as in the pilot study for the error annotation scheme. As a result of the pilot study, we found that the Penn Treebank tag set sufficed in most cases except for errors which learners made. Considering this, we determined a basic rule as follows: “Use the Penn Treebank tag set and preserve the original texts as much as possible.” To handle such errors, we made several modifications and added two new POS tags (CE and UK) and another two for chunking (XP and PH), which are described below. A major modification concerns errors in mechanics such as Tonight,we and beautifulhouse as already explained in Sect. 2. We use the symbol “-” to annotate such cases. For instance, the above two examples are annotated as follows: Tonight,we/NN,-PRP and beautifulhouse/JJ-NN. Note that each POS tag is hyphenated. It can also be used for annotating chunks in the same manner. For instance, Tonight,we is annotated as [NP-PH-NP Tonight,we/NN-,-PRP ]. Here, the tag PH stands for  chunk label and denotes tokens which are not normally chunked (cf., [NP Tonight/NN ] ,/, [NP we/PRP ]). Another major modification was required to handle grammatical errors. Essentially, POS/chunking tags are assigned according to the surface information of the word in question regardless of the existence of any errors. For example, There is apples. is annotated as [NP There/EX ] [VP is/VBZ ] [NP apples/NNS ] ./. Additionally, we define the CE4 tag to annotate errors in which learners use a word with a POS which is not allowed such as in I don’t success cooking. The CE tag encodes a POS which is obtained from the surface information together with the POS which would have been assigned to the word if it were not for the error. For instance, the above example is tagged as I don’t success/CE:NN:VB cooking. In this format, the second and third POSs are separated by “:” which denotes the POS which is obtained from the surface information and the POS which would be assigned 4CE stands for cognitive error. 1214 to the word without an error. The user can select either POS depending on his or her purposes. Note that the CE tag is compatible with the basic annotation scheme because we can retrieve the basic annotation by extracting only the second element (i.e., success/NN). 
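A small parsing helper makes the CE convention concrete: splitting a token such as success/CE:NN:VB yields the surface POS (the second element, which alone recovers the basic annotation) and the POS the word would receive if it were not for the error (the third element). The function below is an illustrative sketch, not part of the released corpus tools.

```python
def parse_token(token):
    """Split an annotated token into (word, surface_pos, error_free_pos).

    For ordinary tags (e.g. 'cooking/NN') the two POSs coincide; for CE tags
    (e.g. 'success/CE:NN:VB') the surface POS is the second element and the
    POS the word would receive without the error is the third element.
    """
    word, tag = token.rsplit("/", 1)
    if tag.startswith("CE:"):
        _, surface_pos, error_free_pos = tag.split(":")
        return word, surface_pos, error_free_pos
    return word, tag, tag

print(parse_token("success/CE:NN:VB"))   # ('success', 'NN', 'VB')
print(parse_token("cooking/NN"))         # ('cooking', 'NN', 'NN')
```

Keeping only the second element recovers the basic annotation success/NN, as noted above.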
If the tag is unknown because of grammatical errors or other phenomena, UK and XP5 are used for POS and chunking, respectively. For spelling errors, the corresponding POS and chunking tag are assigned to mistakenly spelled words if the correct forms can be guessed (e.g., [NP sird/JJ year/NN ]); otherwise UK and XP are used. 4 The Corpus We carried out a learner corpus creation project using the described method. Twenty six Japanese college students participated in the project. At the beginning, we had the students or their parents sign a conventional paper-based copyright transfer form. After that, they did the writing exercise described in Sect. 3 once or twice a week over three months. During that time, they were assigned ten topics, which were determined based on a writing textbook (Okihara, 1985). As described in Sect. 3, they used a blog system to write, submit, and rewrite their essays. Through out the exercises, they did not have access to the others’ essays and their own previous essays. As a result, 233 essays were collected; Table 3 shows the statistics on the collected essays. It turned out that the learners had no difficulties in using the blog system and seemed to focus on writing. Out of the 26 participants, 22 completed the 10 assignments while one student quit before the exercises started. We annotated the grammatical errors of all 233 essays. Two persons were involved in the annotation. After the annotation, another person checked the annotation results; differences in error annotaNumber of essays 233 Number of writers 25 Number of sentences 3,199 Number of words 25,537 Table 3: Statistics on the learner corpus. 5UK and XP stand for unknown and X phrase, respectively. tion were resolved by consulting the first two. The error annotation scheme was found to work well on them. The error-annotated essays can be used for evaluating error detection/correction methods. For POS/chunking annotation, we chose 170 essays out of 233. We annotated them using our POS/chunking scheme; hereafter, the 170 essays will be referred to as the shallow-parsed corpus. 5 Using the Corpus and Discussion 5.1 POS Tagging The 170 essays in the shallow-parsed corpus was used for evaluating existing POS-tagging techniques on texts written by learners. It consisted of 2,411 sentences and 22,452 tokens. HMM-based and CRF-based POS taggers were tested on the shallow-parsed corpus. The former was implemented using tri-grams by the author. It was trained on a corpus consisting of English learning materials (213,017 tokens). The latter was CRFTagger6, which was trained on the WSJ corpus. Both use the Penn Treebank POS tag set. The performance was evaluated using accuracy defined by number of tokens correctly POS-tagged number of tokens  (1) If the number of tokens in a sentence was different in the human annotation and the system output, the sentence was excluded from the calculation. This discrepancy sometimes occurred because the tokenization of the system sometimes differed from that of the human annotators. As a result, 19 and 126 sentences (215 and 1,352 tokens) were excluded from the evaluation in the HMM-based and CRF-based POS taggers, respectively. Table 4 shows the results. The second column corresponds to accuracies on a native-speaker corpus (sect. 00 of the WSJ corpus). The third column corresponds to accuracies on the learner corpus. As shown in Table 4, the CRF-based POS tagger suffers a decrease in accuracy as expected. 
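The accuracy figures reported here follow Equation (1) together with the rule that sentences whose token counts differ between the human annotation and the system output are excluded. A minimal sketch of that computation, assuming each sentence is represented as a list of (token, tag) pairs, is given below.

```python
def tagging_accuracy(gold_sentences, system_sentences):
    """Token-level accuracy (Equation 1), skipping sentences whose token
    counts differ between human annotation and system output."""
    correct = total = 0
    for gold, system in zip(gold_sentences, system_sentences):
        if len(gold) != len(system):        # tokenization mismatch: exclude
            continue
        for (_g_tok, g_tag), (_s_tok, s_tag) in zip(gold, system):
            total += 1
            if g_tag == s_tag:
                correct += 1
    return correct / total if total else 0.0

gold = [[("I", "PRP"), ("frightened", "JJ"), (".", ".")]]
system = [[("I", "PRP"), ("frightened", "VBD"), (".", ".")]]
print(tagging_accuracy(gold, system))       # 0.666...
```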
Interestingly, the HMM-based POS tagger performed better on the learner corpus. This is perhaps because it 6“CRFTagger: CRF English POS Tagger,” Xuan-Hieu Phan, http://crftagger.sourceforge.net/, 2006. 1215 was trained on a corpus consisting of English learning materials whose distribution of vocabulary was expected to be relatively similar to that of the learner corpus. By contrast, it did not perform well on the native-speaker corpus because the size of the training corpus was relatively small and the distribution of vocabulary was not similar, and thus unknown words often appeared. This implies that selecting appropriate texts as a training corpus may improve the performance. Table 5 shows the top five POSs mistakenly tagged as other POSs. An obvious cause of mistakes in both taggers is that they inevitably make errors in the POSs that are not defined in the Penn Treebank tag set, that is, UK and CE. A closer look at the tagging results revealed that phenomena which were common to the writing of learners were major causes of other mistakes. Errors in capitalization partly explain why the taggers made so many mistakes in NN (singular nouns). They often identified erroneously capitalized common nouns as proper nouns as in This Summer/NNP Vacation/NNP. Spelling errors affected the taggers in the same way. Grammatical errors also caused confusion between POSs. For instance, omission of a certain word often caused confusion between a verb and an adjective as in I frightened/VBD. which should be I (was) frightened/JJ. Another interesting case is expressions that learners overuse (e.g., and/CC so/RB on/RB and so/JJ so/JJ). Such phrases are not erroneous but are relatively infrequent in nativespeaker corpora. Therefore, the taggers tended to identify their POSs according to the surface information on the tokens themselves when such phrases appeared in the learner corpus (e.g., and/CC so/RB on/IN and so/RB so/RB). We should be aware that tokenization is also problematic although failures in tokenization were excluded from the accuracies. The influence of the decrease in accuracy on other NLP tasks is expected to be task and/or method dependent. Methods that directly use or handle seMethod Native Corpus Learner Corpus CRF 0.970 0.932 HMM 0.887 0.926 Table 4: POS-tagging accuracy. HMM CRF POS Freq. POS Freq. NN 259 NN 215 VBP 247 RB 166 RB 163 CE 144 CE 150 JJ 140 JJ 108 FW 86 Table 5: Top five POSs mistakenly tagged. quences of POSs are likely to suffer from it. An example is the error detection method (Chodorow and Leacock, 2000), which identifies unnatural sequences of POSs as grammatical errors in the writing of learners. As just discussed above, existing techniques often fail in sequences of POSs that have a grammatical error. For instance, an existing POS tagger likely tags the sentence I frightened. as I/PRP frightened/VBD ./. as we have just seen, and in turn the error detection method cannot identify it as an error because the sequence PRP VBD is not unnatural; it would correctly detect it if the sentence were correctly tagged as I/PRP frightened/JJ ./. For the same reason, the decrease in accuracy may affect the methods (Aarts and Granger, 1998; Granger, 1998; Tono, 2000) for extracting interesting sequences of POSs from learner corpora; for example, BOS7 PRP JJ is an interesting sequence but is never extracted unless the phrase is correctly POS-tagged. It requires further investigation to reveal how much impact the decrease has on these methods. 
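To illustrate why such sequence-based detectors are sensitive to tagging errors, consider a toy detector that flags POS bigrams absent from a native-corpus inventory; the inventory and code below are invented for illustration and do not reproduce the actual method of Chodorow and Leacock (2000).

```python
# Toy inventory of POS bigrams assumed to be frequent in native text.
NATIVE_BIGRAMS = {("BOS", "PRP"), ("PRP", "VBD"), ("PRP", "VBP"),
                  ("VBD", "."), ("JJ", ".")}

def flag_unnatural(pos_tags):
    """Return POS bigrams not found in the native inventory."""
    tags = ["BOS"] + pos_tags
    return [(a, b) for a, b in zip(tags, tags[1:])
            if (a, b) not in NATIVE_BIGRAMS]

# 'I frightened .' mis-tagged as PRP VBD looks natural, so the error is missed;
# the correct tagging PRP JJ exposes the unusual sequence.
print(flag_unnatural(["PRP", "VBD", "."]))   # []
print(flag_unnatural(["PRP", "JJ", "."]))    # [('PRP', 'JJ')]
```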
By contrast, error detection/correction methods based on the bagof-word features (or feature vectors) are expected to suffer less from it since mistakenly POS-tagged tokens are only one of the features. At the same time, we should notice that if the target errors are in the tokens that are mistakenly POS-tagged, the detection will likely fail (e.g., verbs should be correctly identified in tense error detection). In addition to the above evaluation, we attempted to improve the POS taggers using the transformation-based POS-tagging technique (Brill, 1994). In the technique, transformation rules are obtained by comparing the output of a POS tagger and the human annotation so that the differences between the two are reduced. We used the shallow7BOS denotes a beginning of a sentence. 1216 Method Original Improved CRF 0.932 0.934 HMM 0.926 0.933 Table 6: Improvement obtained by transformation. parsed corpus as a test corpus and the other manually POS-tagged corpus created in the pilot study described in Subsect. 3.2.1 as a training corpus. We used POS-based and word-based transformations as Brill (1994) described. Table 6 shows the improvements together with the original accuracies. Table 6 reveals that even the simple application of Brill’s technique achieves a slight improvement in both taggers. Designing the templates of the transformation for learner corpora may achieve further improvement. 5.2 Head Noun Identification In the evaluation of chunking, we focus on head noun identification. Head noun identification often plays an important role in error detection/correction. For example, it is crucial to identify head nouns to detect errors in article and number. We again used the shallow-parsed corpus as a test corpus. The essays contained 3,589 head nouns. We implemented an HMM-based chunker using 5grams whose input is a sequence of POSs, which was obtained by the HMM-based POS tagger described in the previous subsection. The chunker was trained on the same corpus as the HMM-based POS tagger. The performance was evaluated by recall and precision defined by number of head nouns correctly identified number of head nouns (2) and number of head nouns correctly identified number of tokens identified as head noun  (3) respectively. Table 7 shows the results. To our surprise, the chunker performed better than we had expected. A possible reason for this is that sentences written by learners of English tend to be shorter and simpler in terms of their structure. The results in Table 7 also enable us to quantitatively estimate expected improvement in error detection/correction which is achieved by improving chunking. To see this, let us define the following symbols:  : Recall of head noun identification, : recall of error detection without chunking error, recall of error detection with chunking error. and are interpreted as the true recall of error detection and its observed value when chunking error exists, respectively. Here, note that can be expressed as  . For instance, according to Han et al. (2006), their method achieves a recall of 0.40 (i.e.,    ), and thus    assuming that chunking errors exist and recall of head noun identification is     just as in this evaluation. Improving  to    would achieve    without any modification to the error detection method. Precision can also be estimated in a similar manner although it requires a more complicated calculation. 
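Written with explicit notation (the symbols below are ours), the estimate above can be laid out as follows; the figures come from Table 7 and from the recall reported by Han et al. (2006), and the arithmetic is our reading of the example rather than values quoted verbatim.

```latex
% \rho    : recall of head noun identification (Table 7: \rho = 0.903)
% r       : recall of error detection without chunking errors (true recall)
% \hat{r} : recall observed when chunking errors are present
\[
  \hat{r} = \rho\, r
  \quad\Longrightarrow\quad
  r = \frac{\hat{r}}{\rho}.
\]
% With the observed recall \hat{r} = 0.40 of Han et al. (2006):
\[
  r = \frac{0.40}{0.903} \approx 0.44,
\]
% so improving \rho to 1.0 would raise the observed recall to roughly 0.44
% without any modification to the error detection method itself.
```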
6 Conclusions In this paper, we discussed the difficulties inherent in learner corpus creation and a method for efficiently creating a learner corpus. We described the manually error-annotated and shallow-parsed learner corpus which was created using this method. We also showed its usefulness in developing and evaluating POS taggers and chunkers. We believe that publishing this corpus will give researchers a common development and test set for developing related NLP techniques including error detection/correction and POS-tagging/chunking, which will facilitate further research in these areas. A Error tag set This is the list of our error tag set. It is based on the NICT JLE tag set (Izumi et al., 2005).  n: noun – num: number – lxc: lexis – o: other  v: verb – agr: agreement Recall Precision 0.903 0.907 Table 7: Performance on head noun identification. 1217 – tns: tense – lxc: lexis – o: other  mo: auxiliary verb  aj: adjective – lxc: lexis – o: other  av: adverb  prp: preposition – lxc: lexis – o: other  at: article  pn: pronoun  con: conjunction  rel: relative clause  itr: interrogative  olxc: errors in lexis in more than two words  ord: word order  uk: unknown error  f: fragment error References Jan Aarts and Sylviane Granger. 1998. Tag sequences in learner corpora: a key to interlanguage grammar and discourse. Longman Pub Group, London. Eric Brill. 1994. Some advances in transformation-based part of speech tagging. In Proc. of 12th National Conference on Artificial Intelligence, pages 722–727. Martin Chodorow and Claudia Leacock. 2000. An unsupervised method for detecting grammatical errors. In Proc. of 1st Meeting of the North America Chapter of ACL, pages 140–147. Martin Chodorow, Joel R. Tetreault, and Na-Rae Han. 2007. Detection of grammatical errors involving prepositions. In Proc. of 4th ACL-SIGSEM Workshop on Prepositions, pages 25–30. Rachele De Felice and Stephen G. Pulman. 2008. A classifier-based approach to preposition and determiner error correction in L2 English. In Proc. of 22nd International Conference on Computational Linguistics, pages 169–176. Sylviane Granger, Estelle Dagneaux, Fanny Meunier, and Magali Paquot. 2009. International Corpus of Learner English v2. Presses universitaires de Louvain. Sylviane Granger. 1998. Prefabricated patterns in advanced EFL writing: collocations and formulae. In A. P. Cowie, editor, Phraseology: theory, analysis, and application, pages 145–160. Clarendon Press. Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2004. Detecting errors in English article usage with a maximum entropy classifier trained on a large, diverse corpus. In Proc. of 4th International Conference on Language Resources and Evaluation, pages 1625– 1628. Na-Rae Han, Martin Chodorow, and Claudia Leacock. 2006. Detecting errors in English article usage by non-native speakers. Natural Language Engineering, 12(2):115–129. Emi Izumi, Toyomi Saiga, Thepchai Supnithi, Kiyotaka Uchimoto, and Hitoshi Isahara. 2003a. The development of the spoken corpus of Japanese learner English and the applications in collaboration with NLP techniques. In Proc. of the Corpus Linguistics 2003 Conference, pages 359–366. Emi Izumi, Kiyotaka Uchimoto, Toyomi Saiga, Thepchai Supnithi, and Hitoshi Isahara. 2003b. Automatic error detection in the Japanese learners’ English spoken data. In Proc. of 41st Annual Meeting of ACL, pages 145–148. Emi Izumi, Kiyotaka Uchimoto, and Hitoshi Isahara. 2005. Error annotation for corpus of Japanese learner English. In Proc. 
of 6th International Workshop on Linguistically Annotated Corpora, pages 71–80. Claudia Leacock, Martin Chodorow, Michael Gamon, and Joel Tetreault. 2010. Automated Grammatical Error Detection for Language Learners. Morgan & Claypool, San Rafael. John Lee and Stephanie Seneff. 2008. Correcting misuse of verb forms. In Proc. of 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technology Conference, pages 174– 182. Ryo Nagata and Kazuhide Nakatani. 2010. Evaluating performance of grammatical error detection to maximize learning effect. In Proc. of 23rd International Conference on Computational Linguistics, poster volume, pages 894–900. Ryo Nagata, Fumito Masui, Atsuo Kawai, and Naoki Isu. 2004. Recognizing article errors based on the three 1218 head words. In Proc. of Cognition and Exploratory Learning in Digital Age, pages 184–191. Ryo Nagata, Takahiro Wakana, Fumito Masui, Atsuo Kawai, and Naoki Isu. 2005. Detecting article errors based on the mass count distinction. In Proc. of 2nd International Joint Conference on Natural Language Processing, pages 815–826. Ryo Nagata, Atsuo Kawai, Koichiro Morihiro, and Naoki Isu. 2006. A feedback-augmented method for detecting errors in the writing of learners of English. In Proc. of 44th Annual Meeting of ACL, pages 241–248. Katsuaki Okihara. 1985. English writing (in Japanese). Taishukan, Tokyo. Alla Rozovskaya and Dan Roth. 2010a. Annotating ESL errors: Challenges and rewords. In Proc. of NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications, pages 28–36. Alla Rozovskaya and Dan Roth. 2010b. Training paradigms for correcting errors in grammar and usage. In Proc. of 2010 Annual Conference of the North American Chapter of the ACL, pages 154–162. Joel Tetreault, Elena Filatova, and Martin Chodorow. 2010a. Rethinking grammatical error annotation and evaluation with the Amazon Mechanical Turk. In Proc. of NAACL HLT 2010 Fifth Workshop on Innovative Use of NLP for Building Educational Applications, pages 45–48. Joel Tetreault, Jennifer Foster, and Martin Chodorow. 2010b. Using parse features for preposition selection and error detection. In Proc. of 48nd Annual Meeting of the Association for Computational Linguistics Short Papers, pages 353–358. Yukio Tono. 2000. A corpus-based analysis of interlanguage development: analysing POS tag sequences of EFL learner corpora. In Practical Applications in Language Corpora, pages 123–132. 1219
2011
121
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1220–1229, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Crowdsourcing Translation: Professional Quality from Non-Professionals Omar F. Zaidan and Chris Callison-Burch Dept. of Computer Science, Johns Hopkins University Baltimore, MD 21218, USA {ozaidan,ccb}@cs.jhu.edu Abstract Naively collecting translations by crowdsourcing the task to non-professional translators yields disfluent, low-quality results if no quality control is exercised. We demonstrate a variety of mechanisms that increase the translation quality to near professional levels. Specifically, we solicit redundant translations and edits to them, and automatically select the best output among them. We propose a set of features that model both the translations and the translators, such as country of residence, LM perplexity of the translation, edit rate from the other translations, and (optionally) calibration against professional translators. Using these features to score the collected translations, we are able to discriminate between acceptable and unacceptable translations. We recreate the NIST 2009 Urdu-toEnglish evaluation set with Mechanical Turk, and quantitatively show that our models are able to select translations within the range of quality that we expect from professional translators. The total cost is more than an order of magnitude lower than professional translation. 1 Introduction In natural language processing research, translations are most often used in statistical machine translation (SMT), where systems are trained using bilingual sentence-aligned parallel corpora. SMT owes its existence to data like the Canadian Hansards (which by law must be published in both French and English). SMT can be applied to any language pair for which there is sufficient data, and it has been shown to produce state-of-the-art results for language pairs like Arabic–English, where there is ample data. However, large bilingual parallel corpora exist for relatively few languages pairs. There are various options for creating new training resources for new language pairs. These include harvesting the web for translations or comparable corpora (Resnik and Smith, 2003; Munteanu and Marcu, 2005; Smith et al., 2010; Uszkoreit et al., 2010), improving SMT models so that they are better suited to the low resource setting (Al-Onaizan et al., 2002; Probst et al., 2002; Oard et al., 2003; Niessen and Ney, 2004), or designing models that are capable of learning translations from monolingual corpora (Rapp, 1995; Fung and Yee, 1998; Schafer and Yarowsky, 2002; Haghighi et al., 2008). Relatively little consideration is given to the idea of simply hiring translators to create parallel data, because it would seem to be prohibitively expensive. For example, Germann (2001) estimated the cost of hiring professional translators to create a TamilEnglish corpus at $0.36/word. At that rate, translating enough data to build even a small parallel corpus like the LDC’s 1.5 million word Urdu–English corpus would exceed half a million dollars. In this paper we examine the idea of creating low cost translations via crowdscouring. We use Amazon’s Mechanical Turk to hire a large group of nonprofessional translators, and have them recreate an Urdu–English evaluation set at a fraction of the cost of professional translators. 
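As a quick sanity check, the half-million-dollar figure above follows directly from Germann's per-word rate; the snippet below is only that back-of-the-envelope arithmetic, not a calculation taken from the paper.

rate_per_word = 0.36        # dollars per word (Germann, 2001)
corpus_words = 1_500_000    # approximate size of the LDC Urdu-English corpus
print(f"${rate_per_word * corpus_words:,.0f}")   # $540,000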
The original dataset already has professionally-produced reference translations, which allows us to objectively and quantitatively compare the quality of professional and non-professional translations. Although many of the individual non-expert translators produce low-quality, disfluent translations, we show that it is possible to get high-quality translations in aggregate by soliciting multiple translations, redundantly editing them, and then selecting the best of the bunch.

[Figure 1: A comparison of professional translations provided by the LDC to non-professional translations created on Mechanical Turk. The figure shows an Urdu source passage side by side with its professional LDC translation and a non-professional Mechanical Turk translation.]

To select the best translation, we use a machine-learning-inspired approach that assigns a score to each translation we collect. The scores discriminate acceptable translations from those that are not (and competent translators from those who are not). The scoring is based on a set of informative, intuitive, and easy-to-compute features. These include country of residence, number of years speaking English, LM perplexity of the translation, edit rate from the other translations, and (optionally) calibration against professional translators, with the weights set using a small set of gold standard data from professional translators.

2 Crowdsourcing Translation to Non-Professionals

To collect crowdsourced translations, we use Amazon's Mechanical Turk (MTurk), an online marketplace designed to pay people small sums of money to complete Human Intelligence Tasks (or HITs) – tasks that are difficult for computers but easy for people. Example HITs range from labeling images to moderating blog comments to providing feedback on the relevance of results for search queries.
Anyone with an Amazon account can either submit HITs or work on HITs that were submitted by others. Workers are referred to as “Turkers”, and designers of HITs as “Requesters.” A Requester specifies the reward to be paid for each completed item, sometimes as low as $0.01. Turkers are free to select whichever HITs interest them, and to bypass HITs they find uninteresting or which they deem pay too little. The advantages of Mechanical Turk include: • zero overhead for hiring workers • a large, low-cost labor force • easy micropayment system • short turnaround time, as tasks get completed in parallel by many individuals • access to foreign markets with native speakers of many rare languages One downside is that Amazon does not provide any personal information about Turkers. (Each Turker is identifiable only through an anonymous ID like A23KO2TP7I4KK2.) In particular, no information is available about a worker’s educational background, skills, or even native language(s). This makes it difficult to determine if a Turker is qualified to complete a translation task. Therefore, soliciting translations from anonymous non-professionals carries a significant risk of poor translation quality. Whereas hiring a professional translator ensures a degree of quality and care, it is not very difficult to find bad translations provided by Turkers. One Urdu headline, professionally translated as Barack Obama: America Will Adopt a New Iran Strategy, was rendered disfluently by a Turker as Barak Obam will do a new policy with Iran. Another translated it with snarky sarcasm: Barak Obama and America weave new evil strategies against Iran. Figure 1 gives more typical translation examples. The translations often reflect non-native English, but are generally done conscientiously (in spite of the relatively small payment). To improve the accuracy of noisy labels from nonexperts, most existing quality control mechanisms 1221 employ some form of voting, assuming a discrete set of possible labels. This is not the case for translations, where the ‘labels’ are full sentences. When dealing with such a structured output, the space of possible outputs is diverse and complex. We therefore need a different approach for quality control. That is precisely the focus of this work: to propose, and evaluate, such quality control mechanisms. In the next section, we discuss reproducing the Urdu-to-English 2009 NIST evaluation set. We then describe a principled approach to discriminate good translations from bad ones, given a set of redundant translations for the same source sentence. 3 Datasets 3.1 The Urdu-to-English 2009 NIST Evaluation Set We translated the Urdu side of the Urdu–English test set of the 2009 NIST MT Evaluation Workshop. The set consists of 1,792 Urdu sentences from a variety of news and online sources. The set includes four different reference translations for each source sentence, produced by professional translation agencies. NIST contracted the LDC to oversee the translation process and perform quality control. This particular dataset, with its multiple reference translations, is very useful because we can measure the quality range for professional translators, which gives us an idea of whether or not the crowdsourced translations approach the quality of a professional translator. 3.2 Translation HIT design We solicited English translations for the Urdu sentences in the NIST dataset. Amazon has enabled payments in rupees, which has attracted a large demographic of workers from India (Ipeirotis, 2010). 
Although it does not yet have s direct payment in Pakistan’s local currency, we found that a large contingent of our workers are located in Pakistan. Our HIT involved showing the worker a sequence of Urdu sentences, and asking them to provide an English translation for each one. The screen also included a brief set of instructions, and a short questionnaire section. The reward was set at $0.10 per translation, or roughly $0.005 per word. In our first collection effort, we solicited only one translation per Urdu sentence. After confirming that the task is feasible due to the large pool of workers willing and able to provide translations, we carried out a second collection effort, this time soliciting three translations per Urdu sentence (from three distinct translators). The interface was also slightly modified, in the following ways: • Instead of asking Turkers to translate a full document (as in our first pass), we instead split the data set into groups of 10 sentences per HIT. • We converted the Urdu sentences into images so that Turkers could not cheat by copying-andpasting the Urdu text into an MT system. • We collected information about each worker’s geographic location, using a JavaScript plugin. The translations from the first pass were of noticeably low quality, most likely due to Turkers using automatic translation systems. That is why we used images instead of text in our second pass, which yielded significant improvements. That said, we do not discard the translations from the first pass, and we do include them in our experiments. 3.3 Post-editing and Ranking HITs In addition to collecting four translations per source sentence, we also collected post-edited versions of the translations, as well as ranking judgments about their quality. Figure 2 gives examples of the unedited translations that we collected in the translation pass. These typically contain many simple mistakes like misspellings, typos, and awkward word choice. We posted another MTurk task where we asked workers to edit the translations into more fluent and grammatical sentences. We restrict the task to US-based workers to increase the likelihood that they would be native English speakers. We also asked US-based Turkers to rank the translations. We presented the translations in groups of four, and the annotator’s task was to rank the sentences by fluency, from best to worst (allowing ties). We collected redundant annotations in these two tasks as well. Each translation is edited three times (by three distinct editors). We solicited only one edit per translation from our first pass translation effort. So, in total, we had 10 post-edited translations for 1222 Avoiding dieting to prevent from flu abstention from dieting in order to avoid Flu Abstain from decrease eating in order to escape from flue In order to be safer from flu quit dieting This research of American scientists came in front after experimenting on mice. This research from the American Scientists have come up after the experiments on rats. This research of American scientists was shown after many experiments on mouses. According to the American Scientist this research has come out after much experimentations on rats. Experiments proved that mice on a lower calorie diet had comparatively less ability to fight the flu virus. in has been proven from experiments that rats put on diet with less calories had less ability to resist the Flu virus. It was proved by experiments the low calories eaters mouses had low defending power for flue in ratio. 
Experimentaions have proved that those rats on less calories diet have developed a tendency of not overcoming the flu virus. research has proven this old myth wrong that its better to fast during fever. Research disproved the old axiom that " It is better to fast during fever" The research proved this old talk that decrease eating is useful in fever. This Research has proved the very old saying wrong that it is good to starve while in fever. Figure 2: We redundantly translate each source sentence by soliciting multiple translations from different Turkers. These translations are put through a subsequent editing set, where multiple edited versions are produced. We select the best translation from the set using features that predict the quality of each translation and each translator. each source sentence (plus the four original translations). In the ranking task, we collected judgments from five distinct workers for each translation group. 3.4 Data Collection Cost We paid a reward of $0.10 to translate a sentence, $0.25 to edit a set of ten sentences, and $0.06 to rank a set of four translation groups. Therefore, we had the following costs: • Translation cost: $716.80 • Editing cost: $447.50 • Ranking cost: $134.40 (If not done redundantly, those values would be $179.20, $44.75, and $26.88, respectively.) Adding Amazon’s 10% fee, this brings the grand total to under $1,500, spent to collect 7,000+ translations, 17,000+ edited translations, and 35,000+ rank labels.1 We also use about 10% of the existing professional references in most of our experiments (see 4.2 and 4.3). If we estimate the cost at $0.30/word, that would roughly be an additional $1,000. 3.5 MTurk Participation 52 different Turkers took part in the translation task, each translating 138 sentences on average. In the editing task, 320 Turkers participated, averaging 56 sentences each. In the ranking task, 245 Turkers participated, averaging 9.1 HITs each, or 146 rank labels (since each ranking HIT involved judging 16 translations, in groups of four). 1Data URL: www.cs.jhu.edu/˜ozaidan/RCLMT. 4 Quality Control Model Our approach to building a translation set from the available data is to select, for each Urdu sentence, the one translation that our model believes to be the best out of the available translations. We evaluate various selection techniques by comparing the selected Turker translations against existing professionally-produced translations. The more the selected translations resemble the professional translations, the higher the quality. 4.1 Features Used to Select Best Translations Our model selects one of the 14 English options generated by Turkers. For a source sentence si, our model assigns a score to each sentence in the set of available translations {ti,1, ...ti,14}. The chosen translation is the highest scoring translation: tr(si) = tri,j∗s.t. j∗= argmax j score(ti,j) (1) where score(.) is the dot product: score(ti,j) def= ⃗w · ⃗f(ti,j) (2) Here, ⃗w is the model’s weight vector (tuned as described below in 4.2), and ⃗f is a translation’s corresponding feature vector. Each feature is a function computed from the English sentence string, the Urdu sentence string, the workers (translators, editors, and rankers), and/or the rank labels. We use 21 features, categorized into the following three sets. 1223 Sentence-level (6 features). Most of the Turkers performing our task were native Urdu speakers whose second language was English, and they do not always produce natural-sounding English sentences. 
Therefore, the first set of features attempt to discriminate good English sentences from bad ones. • Language model features: each sentence is assigned a log probability and per-word perplexity score, using a 5-gram language model trained on the English Gigaword corpus. • Sentence length features: a good translation tends to be comparable in length to the source sentence, whereas an overly short or long translation is probably bad. We add two features that are the ratios of the two lengths (one penalizes short sentences and one penalizes long ones). • Web n-gram match percentage: we assign a score to each sentence based on the percentage of the n-grams (up to length 5) in the translation that exist in the Google N-Gram Database. • Web n-gram geometric average: we calculate the average over the different n-gram match percentages (similar to the way BLEU is computed). We add three features corresponding to max n-gram lengths of 3, 4, and 5. • Edit rate to other translations: a bad translation is likely not to be very similar to other translations, since there are many more ways a translation can be bad than for it to be good. So, we compute the average edit rate distance from the other translations (using the TER metric). Worker-level (12 features). We add worker-level features that evaluate a translation based on who provided it. • Aggregate features: for each sentence-level feature above, we have a corresponding feature computed over all of that worker’s translations. • Language abilities: we ask workers to provide information about their language abilities. We have a binary feature indicating whether Urdu is their native language, and a feature for how long they have spoken it. We add a pair of equivalent features for English. • Worker location: two binary features reflect a worker’s location, one to indicate if they are located in Pakistan, and one to indicate if they are located in India. Ranking (3 features). The third set of features is based on the ranking labels we collected (see 3.3). • Average rank: the average of the five rank labels provided for this translation. • Is-Best percentage: how often the translation was top-ranked among the four translations. • Is-Better percentage: how often the translation was judged as the better translation, over all pairwise comparisons extracted from the ranks. Other features (not investigated here) could include source-target information, such as translation model scores or the number of source words translated correctly according to a bilingual dictionary. 4.2 Parameter Tuning Once features are computed for the sentences, we must set the model’s weight vector ⃗w. Naturally, the weights should be chosen so that good translations get high scores, and bad translations get low scores. We optimize translation quality against a small subset (10%) of reference (professional) translations. To tune the weight vector, we use the linear search method of Och (2003), which is the basis of Minimum Error Rate Training (MERT). MERT is an iterative algorithm used to tune parameters of an MT system, which operates by iteratively generating new candidate translations and adjusting the weights to give good translations a high score, then regenerating new candidates based on the updated weights, etc. In our work, the set of candidate translations is fixed (the 14 English sentences for each source sentence), and therefore iterating the procedure is not applicable. We use the Z-MERT software package (Zaidan, 2009) to perform the search. 
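To make the selection concrete, here is a small illustrative sketch (not the authors' code) of the scoring rule in Eqs. (1) and (2) together with two of the sentence-level features described above: the sentence-length ratios and the average edit rate to the other translations. A plain word-level edit rate stands in for the TER metric used in the paper, and the weights are assumed to have been tuned already (Section 4.2 below).

def length_ratios(translation, source_len):
    # Two features penalizing overly short and overly long translations, respectively.
    tlen = max(len(translation.split()), 1)
    return tlen / source_len, source_len / tlen

def word_edit_rate(hyp, ref):
    # Word-level edit distance normalized by reference length (a stand-in for TER).
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (h[i - 1] != r[j - 1]))
    return d[len(h)][len(r)] / max(len(r), 1)

def avg_edit_rate(translation, others):
    # Average distance from the redundant translations of the same source sentence.
    return sum(word_edit_rate(translation, o) for o in others) / len(others)

def select_best(candidates, weights, featurize):
    # Eqs. (1)-(2): return the candidate with the highest dot product w . f(t).
    def score(t):
        return sum(w * f for w, f in zip(weights, featurize(t)))
    return max(candidates, key=score)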
4.3 The Worker Calibration Feature Since we use a small portion of the reference translations to perform weight tuning, we can also use that data to compute another worker-specific feature. Namely, we can evaluate the competency of each worker by scoring their translations against the reference translations. We then use that feature for every translation given by that worker. The intuition 1224 is that workers known to produce good translations are likely to continue to produce good translations, and the opposite is likely true as well. 4.4 Evaluation Strategy To measure the quality of the translations, we make use of the existing professional translations. Since we have four professional translation sets, we can calculate the BLEU score (Papineni et al., 2002) for one professional translator P1 using the other three P2,3,4 as a reference set. We repeat the process four times, scoring each professional translator against the others, to calculate the expected range of professional quality translation. We can see how a translation set T (chosen by our model) compares to this range by calculating T’s BLEU scores against the same four sets of three reference translations. We will evaluate different strategies for selecting such a set T, and see how much each improves on the BLEU score, compared to randomly picking from among the Turker translations. We also evaluate Turker translation quality by using them as reference sets to score various submissions to the NIST MT evaluation. Specifically, we measure the correlation (using Pearson’s r) between BLEU scores of MT systems measured against nonprofessional translations, and BLEU scores measured against professional translations. Since the main purpose of the NIST dataset was to compare MT systems against each other, this is a more direct fitness-for-task measure. We chose the middle 6 systems (in terms of performance) submitted to the NIST evaluation, out of 12, as those systems were fairly close to each other, with less than 2 BLEU points separating them.2 5 Experimental Results We establish the performance of professional translators, calculate oracle upper bounds on Turker translation quality, and carry out a set of experiments that demonstrate the effectiveness of our model and that determine which features are most helpful. Each number reported in this section is an average of four numbers, corresponding to the four possible 2Using all 12 systems artificially inflates correlation, due to the vast differences between the systems. For instance, the top system outperforms the bottom system by 15 BLEU points! ways of choosing 3 of the 4 reference sets. Furthermore, each of those 4 numbers is itself based on a five-fold cross validation, where 80% of the data is used to compute feature values, and 20% used for evaluation. The 80% portion is used to compute the aggregate worker-level features. For the worker calibration feature, we utilize the references for 10% of the data (which is within the 80% portion). 5.1 Translation Quality: BLEU Scores Compared to Professionals We first evaluated the reference sets against each other, in order to quantify the concept of “professional quality”. On average, evaluating one reference set against the other three gives a BLEU score of 42.38 (Figure 3). A Turker set of translations scores 28.13 on average, which highlights the loss in quality when collecting translations from amateurs. 
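The leave-one-out evaluation just described is straightforward to mirror in code. The sketch below is illustrative rather than the authors' implementation; bleu stands for any corpus-level BLEU scorer that takes a hypothesis set and a list of reference sets, so no particular toolkit is assumed.

def professional_range(reference_sets, bleu):
    # Score each professional reference set against the other three (Section 4.4).
    scores = []
    for i, held_out in enumerate(reference_sets):
        others = [refs for j, refs in enumerate(reference_sets) if j != i]
        scores.append(bleu(held_out, others))
    return scores

def evaluate_selection(selected, reference_sets, bleu):
    # Score a selected Turker set against the same four leave-one-out reference triples.
    return [bleu(selected, [r for j, r in enumerate(reference_sets) if j != i])
            for i in range(len(reference_sets))]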
To make the gap clearer, the output of a state-ofthe-art machine translation system (the syntax-based variant of Joshua; Li et al. (2010)) achieves a score of 26.91, a mere 1.22 worse than the Turkers. We perform two oracle experiments to determine if there exist high-quality Turker translations in the first place. The first oracle operates on the segment level: for each source segment, choose from the four translations the one that scores highest against the reference sentence. The second oracle operates on the worker level: for each source segment, choose from the four translations the one provided by the worker whose translations (over all sentences) score the highest. The two oracles achieve BLEU scores of 43.75 and 40.64, respectively – well within the range of professional translators. We examined two voting-inspired methods, since taking a majority vote usually works well when dealing with MTurk data. The first selects the translation with the minimum average TER (Snover et al., 2006) against the other three translations, since that would be a ‘consensus’ translation. The second method selects the translation that received the best average rank, using the rank labels assigned by other Turkers (see 3.3). These approaches achieve BLEU scores of 34.41 and 36.64, respectively. The main set of experiments evaluated the features from 4.1 and 4.3. We applied our approach using each of the four feature types: sentence features, Turker features, rank features, and the cali1225 26.91 28.13 43.75 40.64 34.41 36.64 42.38 34.95 35.79 37.14 37.82 39.06 20 25 30 35 40 45 Reference (ave.) Joshua (syntax) Turker (ave.) Oracle (segment) Oracle (Turker) Lowest TER Best rank Sentence features Turker features Rank features Calibration feature All features BLEU Figure 3: BLEU scores for different selection methods, measured against the reference sets. Each score is an average of four BLEU scores, each calculated against three LDC reference translations. The five right-most bars are colored in orange to indicate selection over a set that includes both original translations as well as edited versions of them. bration feature. That yielded BLEU scores ranging from 34.95 to 37.82. With all features combined, we achieve a higher score of 39.06, which is within the range of scores for the professional translators. 5.2 Fitness for a Task: Correlation With Professionals When Ranking MT Systems We evaluated the selection methods by measuring correlation with the references, in terms of BLEU scores assigned to outputs of MT systems. The results, in Table 1, tell a fairly similar story as evaluating with BLEU: references and oracles naturally perform very well, and the loss in quality when selecting arbitrary Turker translations is largely eliminated using our selection strategy. Interestingly, when using the Joshua output as a reference set, the performance is quite abysmal. Even though its BLEU score is comparable to the Turker translations, it cannot be used to distinguish closely matched MT systems from each other.3 6 Analysis The oracles indicate that there is usually an acceptable translation from the Turkers for any given sentence. Since the oracles select from a small group of only 4 translations per source segment, they are not overly optimistic, and rather reflect the true potential of the collected translations. 
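The two oracles can be written down compactly. In the illustrative sketch below (not the authors' code), each source segment comes with a list of (worker, translation) pairs, and sent_score is assumed to be some per-sentence quality score computed against the held-out references.

def segment_oracle(segments, sent_score):
    # For each segment, keep the single best-scoring candidate translation.
    return [max(cands, key=lambda wt: sent_score(wt[1]))[1] for cands in segments]

def worker_oracle(segments, sent_score):
    # Rate each worker by the average score of all their translations,
    # then keep, per segment, the translation from the best-rated worker.
    totals, counts = {}, {}
    for cands in segments:
        for worker, text in cands:
            totals[worker] = totals.get(worker, 0.0) + sent_score(text)
            counts[worker] = counts.get(worker, 0) + 1
    quality = {w: totals[w] / counts[w] for w in totals}
    return [max(cands, key=lambda wt: quality[wt[0]])[1] for cands in segments]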
The results indicate that, although some features are more useful than others, much of the benefit from combining all the features can be obtained from any one set of features, with the benefit of 3It should be noted that the Joshua system was not one of the six MT systems we scored in the correlation experiments. 34.71 35.45 37.14 37.22 37.96 20 25 30 35 40 45 Sentence features Turker features Rank features Calibration feature All features BLEU Figure 4: BLEU scores for the five right-most setups from Figure 3, constrained over the original translations. adding more features being somewhat orthogonal. Finally, we performed a series of experiments exploring the calibration feature, varying the amount of gold-standard references from 10% all the way up to 80%. As expected, the performance improved as more references were used to calibrate the translators (Figure 5). What’s particularly important about this experiment is that it shows the added benefit of the other features: We would have to use 30%– 40% of the references to get the same benefit obtained from combining the non-calibration features and only 10% for the calibration feature (dashed line in the Figure; BLEU = 39.06). 6.1 Cost Reduction While the combined cost of our data collection effort ($2,500; see 3.4) is quite low considering the amount of collected data, it would be more attractive if the cost could be reduced further without losing much in translation quality. To that end, we investigated lowering cost along two dimensions: eliminating the need for professional translations, and decreasing the amount of edited translations. 1226 Selection Method Pearson’s r2 Reference (ave.) 0.81 ± 0.07 Joshua (syntax) 0.08 ± 0.09 Turker (ave.) 0.60 ± 0.17 Oracle (segment) 0.81 ± 0.09 Oracle (Turker) 0.79 ± 0.10 Lowest TER 0.50 ± 0.26 Best rank 0.74 ± 0.17 Sentence features 0.56 ± 0.21 Turker features 0.59 ± 0.19 Rank features 0.75 ± 0.14 Calibration feature 0.76 ± 0.13 All features 0.77 ± 0.11 Table 1: Correlation (± std. dev.) for different selection methods, compared against the reference sets. The professional translations are used in our approach for computing the worker calibration feature (subsection 4.3) and for tuning the weights of the other features. We use a relatively small amount for this purpose, but we investigate a different setup whereby no professional translations are used at all. This eliminates the worker calibration feature, but, perhaps more critically, the feature weights must be set in a different fashion, since we cannot optimize BLEU on reference data anymore. Instead, we use the rank labels (from 3.3) as a proxy for BLEU, and set the weights so that better ranked translations receive higher scores. Note that the rank features will also be excluded in this setup, since they are perfect predictors of rank labels. On the one hand, this means no rank labels need to be collected, other than for a small set used for weight tuning, further reducing the cost of data collection. However, this leads to a significant drop in performance, yielding a BLEU score of 34.86. Another alternative for cost reduction would be to reduce the number of collected edited translations. To that end, we first investigate completely eliminating the editing phase, and considering only unedited translations. In other words, the selection will be over a group of four English sentences rather than 14 sentences. Completely eliminating the edited translations has an adverse effect, as expected (Figure 4). 
Another option, rather than eliminating the editing phase altogether, would be to consider the edited translations of only the translation receiving 37.0 37.5 38.0 38.5 39.0 39.5 40.0 40.5 0 20 40 60 80 100 % References Used for Calibration BLEU 10%+other features (i.e. "All features" from Figure 3) Figure 5: The effect of varying the amount of calibration data (and using only the calibration feature). The 10% point (BLEU = 37.82) and the dashed line (BLEU = 39.06) correspond to the two right-most bars of Figure 3. the best rank labels. This would reflect a data collection process whereby the editing task is delayed until after the rank labels are collected, with the rank labels used to determine which translations are most promising to post-edit (in addition to using the rank labels for the ranking features). Using this approach enables us to greatly reduce the number of edited translations collected, while maintaining good performance, obtaining a BLEU score of 38.67. It is therefore our recommendation that crowdsourced translation efforts adhere to the following pipeline: collect multiple translations for each source sentence, collect rank labels for the translations, and finally collect edited versions of the top ranked translations. 7 Related Work Dawid and Skene (1979) investigated filtering annotations using the EM algorithm, estimating annotator-specific error rates in the context of patient medical records. Snow et al. (2008) were among the first to use MTurk to obtain data for several NLP tasks, such as textual entailment and word sense disambiguation. Their approach, based on majority voting, had a component for annotator bias correction. They showed that for such tasks, a few nonexpert labels usually suffice. Whitehill et al. (2009) proposed a probabilistic model to filter labels from non-experts, in the context of an image labeling task. Their system generatively models image difficulty, as well as noisy, even 1227 adversarial, annotators. They apply their method to simulated labels rather than real-life labels. Callison-Burch (2009) proposed several ways to evaluate MT output on MTurk. One such method was to collect reference translations to score MT output. It was only a pilot study (50 sentences in each of several languages), but it showed the possibility of obtaining high-quality translations from non-professionals. As a followup, Bloodgood and Callison-Burch (2010) solicited a single translation of the NIST Urdu-to-English dataset we used. Their evaluation was similar to our correlation experiments, examining how well the collected translations agreed with the professional translations when evaluating three MT systems. That paper appeared in a NAACL 2010 workshop organized by Callison-Burch and Dredze (2010), focusing on MTurk as a source of data for speech and language tasks. Two relevant papers from that workshop were by Ambati and Vogel (2010), focusing on the design of the translation HIT, and by Irvine and Klementiev (2010), who created translation lexicons between English and 42 rare languages. Resnik et al. (2010) explore a very interesting way of creating translations on MTurk, relying only on monolingual speakers. Speakers of the target language iteratively identified problems in machine translation output, and speakers of the source language paraphrased the corresponding source portion. The paraphrased source would then be retranslated to produce a different translation, hopefully more coherent than the original. 
8 Conclusion and Future Work We have demonstrated that it is possible to obtain high-quality translations from non-professional translators, and that the cost is an order of magnitude cheaper than professional translation. We believe that crowdsourcing can play a pivotal role in future efforts to create parallel translation datasets. Beyond the cost and scalability, crowdsourcing provides access to languages that currently fall outside the scope of statistical machine translation research. We have begun an ongoing effort to collect translations for several low resource languages, including Tamil, Yoruba, and dialectal Arabic. We plan to: • Investigate improvements from system combination techniques to the redundant translations. • Modify our editing step to collect an annotated corpus of English as a second language errors. • Calibrate against good Turkers, instead of professionals, once they have been identified. • Predict whether it is necessary to solicit another translation instead of collecting a fixed number. • Analyze how much quality matters if our goal is to train a statistical translation system. Acknowledgments This research was supported by the Human Language Technology Center of Excellence, by gifts from Google and Microsoft, and by the DARPA GALE program under Contract No. HR0011-06-20001. The views and findings are the authors’ alone. We would like to thank Ben Bederson, Philip Resnik, and Alain D´esilets for organizing workshops focused on crowdsourcing translation (Bederson and Resnik, 2010; D´esilets, 2010). We are grateful for the feedback of workshop participants, which helped shape this research. References Yaser Al-Onaizan, Ulrich Germann, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Daniel Marcu, and Kenji Yamada. 2002. Translation with scarce bilingual resources. Machine Translation, 17(1), March. Vamshi Ambati and Stephan Vogel. 2010. Can crowds build parallel corpora for machine translation systems? In Proceedings of the NAACL HLT Workshop on Creating Speech and Language Data With Amazon’s Mechanical Turk, pages 62–65. Ben Bederson and Philip Resnik. 2010. Workshop on crowdsourcing and translation. http://www.cs. umd.edu/hcil/monotrans/workshop/. Michael Bloodgood and Chris Callison-Burch. 2010. Using Mechanical Turk to build machine translation evaluation sets. In Proceedings of the NAACL HLT Workshop on Creating Speech and Language Data With Amazon’s Mechanical Turk, pages 208–211. Chris Callison-Burch and Mark Dredze. 2010. Creating speech and language data with Amazon’s Mechanical Turk. In Proceedings of the NAACL HLT Workshop on Creating Speech and Language Data With Amazon’s Mechanical Turk, pages 1–12. Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using Amazon’s Me1228 chanical Turk. In Proceedings of EMNLP, pages 286– 295. A. P. Dawid and A. M. Skene. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, 28(1):20–28. Alain D´esilets. 2010. AMTA 2010 workshop on collaborative translation: technology, crowdsourcing, and the translator perspective. http://bit.ly/gPnqR2. Pascale Fung and Lo Yuen Yee. 1998. An ir approach for translating new words from nonparallel, comparable texts. In Proceedings of ACL/CoLing. Ulrich Germann. 2001. Building a statistical machine translation system from scratch: How much bang for the buck can we expect? In ACL 2001 Workshop on Data-Driven Machine Translation, Toulouse, France. 
Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL/HLT. Panos Ipeirotis. 2010. New demographics of Mechanical Turk. http://behind-the-enemy-lines. blogspot.com/2010/03/ new-demographics-of-mechanical-turk. html. Ann Irvine and Alexandre Klementiev. 2010. Using Mechanical Turk to annotate lexicons for less commonly used languages. In Proceedings of the NAACL HLT Workshop on Creating Speech and Language Data With Amazon’s Mechanical Turk, pages 108–113. Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitkevitch, Ann Irvine, Sanjeev Khudanpur, Lane Schwartz, Wren Thornton, Ziyuan Wang, Jonathan Weese, and Omar Zaidan. 2010. Joshua 2.0: A toolkit for parsing-based machine translation with syntax, semirings, discriminative training and other goodies. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 133–137. Dragos Munteanu and Daniel Marcu. 2005. Improving machine translation performance by exploiting comparable corpora. Computational Linguistics, 31(4):477– 504, December. Sonja Niessen and Hermann Ney. 2004. Statistical machine translation with scarce resources using morpho-syntatic analysis. Computational Linguistics, 30(2):181–204. Doug Oard, David Doermann, Bonnie Dorr, Daqing He, Phillip Resnik, William Byrne, Sanjeeve Khudanpur, David Yarowsky, Anton Leuski, Philipp Koehn, and Kevin Knight. 2003. Desperately seeking Cebuano. In Proceedings of HLT/NAACL. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160–167. Kishore Papineni, Salim Poukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL, pages 311–318. Katharina Probst, Lori Levin, Erik Peterson, Alon Lavie, and Jamie Carbonell. 2002. MT for minority languages using elicitation-based learning of syntactic transfer rules. Machine Translation, 17(4). Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of ACL. Philip Resnik and Noah Smith. 2003. The web as a parallel corpus. Computational Linguistics, 29(3):349– 380, September. Philip Resnik, Olivia Buzek, Chang Hu, Yakov Kronrod, Alex Quinn, and Benjamin Bederson. 2010. Improving translation via targeted paraphrasing. In Proceedings of EMNLP, pages 127–137. Charles Schafer and David Yarowsky. 2002. Inducing translation lexicons via diverse similarity measures and bridge languages. In Conference on Natural Language Learning-2002, pages 146–152. Jason R. Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 403–411, Los Angeles, California, June. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas (AMTA). Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast – but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of EMNLP, pages 254–263. Jakob Uszkoreit, Jay M. Ponte, Ashok C. Popat, and Moshe Dubiner. 2010. 
Large scale parallel document mining for machine translation. In Proc. of the International Conference on Computational Linguistics (COLING). Jacob Whitehill, Paul Ruvolo, Tingfan Wu, Jacob Bergsma, and Javier Movellan. 2009. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Proceedings of NIPS, pages 2035–2043. Omar F. Zaidan. 2009. Z-MERT: A fully configurable open source tool for minimum error rate training of machine translation systems. The Prague Bulletin of Mathematical Linguistics, 91:79–88. 1229
2011
122
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1230–1238, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Statistical Tree Annotator and Its Applications Xiaoqiang Luo and Bing Zhao IBM T.J. Watson Research Center 1101 Kitchawan Road Yorktown Heights, NY 10598 {xiaoluo,zhaob}@us.ibm.com Abstract In many natural language applications, there is a need to enrich syntactical parse trees. We present a statistical tree annotator augmenting nodes with additional information. The annotator is generic and can be applied to a variety of applications. We report 3 such applications in this paper: predicting function tags; predicting null elements; and predicting whether a tree constituent is projectable in machine translation. Our function tag prediction system outperformssignificantly published results. 1 Introduction Syntactic parsing has made tremendous progress in the past 2 decades (Magerman, 1994; Ratnaparkhi, 1997; Collins, 1997; Charniak, 2000; Klein and Manning, 2003; Carreras et al., 2008), and accurate syntactic parsing is often assumed when developing other natural language applications. On the other hand, there are plenty of language applications where basic syntactic information is insufficient. For instance, in question answering, it is highly desirable to have the semantic information of a syntactic constituent, e.g., a noun-phrase (NP) is a person or an organization; an adverbial phrase is locative or temporal. As syntactic information has been widely used in machine translation systems (Yamada and Knight, 2001; Xiong et al., 2010; Shen et al., 2008; Chiang, 2010; Shen et al., 2010), an interesting question is to predict whether or not a syntactic constituent is projectable1 across a language pair. 1A constituent in the source language is projectable if it can be aligned to a contiguous span in the target language. Such problems can be abstracted as adding additional annotations to an existing tree structure. For example, the English Penn treebank (Marcus et al., 1993) contains function tags and many carry semantic information. To add semantic information to the basic syntactic trees, a logical step is to predict these function tags after syntactic parsing. For the problem of predicting projectable syntactic constituent, one can use a sentence alignment tool and syntactic trees on source sentences to create training data by annotating a tree node as projectable or not. A generic tree annotator can also open the door of solving other natural language problems so long as the problem can be cast as annotating tree nodes. As one such example, we will present how to predict empty elements for the Chinese language. Some of the above-mentioned problems have been studied before: predicting function tags were studied in (Blaheta and Charniak, 2000; Blaheta, 2003; Lintean and Rus, 2007a), and results of predicting and recovering empty elements can be found in (Dienes et al., 2003; Schmid, 2006; Campbell, 2004). In this work, we will show that these seemingly unrelated problems can be treated uniformly as adding annotations to an existing tree structure, which is the first goal of this work. Second, the proposed generic tree annotator can also be used to solve new problems: we will show how it can be used to predict projectable syntactic constituents. 
Third, the uniform treatment not only simplifies the model building process, but also allows us to concentrate on discovering the most useful features for a particular application, which often leads to improved performance; e.g., we find some features are very effective in predicting function tags, and our system has a significantly lower error rate than (Blaheta and Charniak, 2000; Lintean and Rus, 2007a). The rest of the paper is organized as follows. Section 2 describes our tree annotator, which is a conditional log-linear model. Section 3 describes the features used in our system. Next, three applications of the proposed tree annotator are presented in Section 4: predicting English function tags, predicting Chinese empty elements and predicting Arabic projectable constituents. Section 5 compares our work with related prior art.

2 A MaxEnt Tree Annotator Model

The input to the tree annotator is a tree T. While T can be of any type, we concentrate on the syntactic parse tree in this paper. The non-terminal nodes N = {n : n ∈ T} of T are associated with an order by which they are visited, so that they can be indexed as n_1, n_2, ..., n_{|T|}, where |T| is the number of non-terminal nodes in T. As an example, Figure 1 shows a syntactic parse tree with the prefix order (i.e., the number at the upper-right corner of each non-terminal node), where child nodes are visited recursively from left to right before the parent node is visited. Thus, the NP-SBJ node is visited first, followed by the NP spanning duo action, followed by the PP-CLR node, etc.

[Figure 1: A sample tree for the sentence "Newsnight returns to duo action tonight"; the number on the upper-right corner of each non-terminal node is the visit order: NP-SBJ (1), NP (2), PP-CLR (3), NP-TMP (4), VP (5), S (6).]

With a prescribed tree visit order, our tree annotator model predicts a symbol l_i, where l_i takes a value from a predefined finite set L, for each non-terminal node n_i in a sequential fashion:

P(l_1, \dots, l_{|T|} \mid T) = \prod_{i=1}^{|T|} P(l_i \mid l_1, \dots, l_{i-1}, T)    (1)

The visit order is important since it determines what is in the conditioning of Eq. (1). P(l_i | l_1, ..., l_{i-1}, T) in this work is a conditional log-linear (or MaxEnt) model (Berger et al., 1996):

P(l_i \mid l_1, \dots, l_{i-1}, T) = \frac{\exp\left( \sum_k \lambda_k g_k(l_1^{i-1}, T, l_i) \right)}{Z(l_1^{i-1}, T)}    (2)

where

Z(l_1^{i-1}, T) = \sum_{x \in L} \exp\left( \sum_k \lambda_k g_k(l_1^{i-1}, T, x) \right)

is the normalizing factor that ensures P(l_i | l_1, ..., l_{i-1}, T) in Equation (2) is a probability, and {g_k(l_1^{i-1}, T, l_i)} are feature functions. There are efficient training algorithms to find optimal weights relative to a labeled training data set once the feature functions {g_k(l_1^{i-1}, T, l_i)} are selected (Berger et al., 1996; Goodman, 2002; Malouf, 2002). In our work, we use the SCGIS training algorithm (Goodman, 2002), and the features used in our systems are detailed in the next section. Once a model is trained, at testing time it is applied to the input tree nodes in the same order. Figure 1 highlights the prediction of the function tag for node 3 (i.e., the PP-CLR node in the thickened box) after the 2 shaded nodes (the NP-SBJ node and the NP node) have been predicted. Note that by this time the predicted values are available to the system, while unvisited nodes (nodes in dashed boxes in Figure 1) cannot provide such information.
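A minimal decoding sketch may help make the model concrete; it is an illustration under our own conventions, not the authors' implementation. Nodes are assumed to carry label and children attributes, score(node, history, label) plays the role of the exponent in Eq. (2) (the normalizer cancels inside the argmax), and decoding is greedy, one node at a time in the Figure 1 order.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
    def is_phrasal(self):
        # Phrasal (annotated) nodes have at least one non-leaf child; preterminals do not.
        return any(c.children for c in self.children)

def visit_order(root):
    # Children are visited recursively left to right before their parent (Figure 1).
    out = []
    def rec(node):
        for child in node.children:
            if child.is_phrasal():
                rec(child)
        out.append(node)
    rec(root)
    return out

def annotate(root, labels, score):
    # Greedy sequential labelling following Eq. (1); past predictions feed later decisions.
    history = []
    for node in visit_order(root):
        node.tag = max(labels, key=lambda l: score(node, history, l))
        history.append((node, node.tag))
    return root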
The second column contains a brief description of each feature, and the third column contains the feature value when the feature at the same row is applied to the PP-node of Figure 1 for the task of predicting function tags. Feature 1 through 8 are non-lexical features in that all of them are computed based on the labels or POS tags of neighboring nodes (e.g., Feature 4 computes the label or POS tag of the right most child), or the structure information (e.g., Feature 5 computes the number of child nodes). 1231 Feature 9 and 10 are computed from past predicted values. When predicting the function tag for the PP-node in Figure 1, there is no predicted value for its left-sibling and any of its child node. That’s why both feature values are NONE, a special symbol signifying that a node does not carry any function tag. If we were to predict the function tag for the VP-node, the value of Feature 9 would be SBJ, while Feature 10 will be instantiated twice with one value being CLR, another being TMP. No. Description Value 1 current node label PP 2 parent node label VP 3 left-most child label/tag TO 4 right-most child label/tag NP 5 number of child nodes 2 6 CFG rule PP->TO NP 7 label/tag of left sibling VBZ 8 label/tag of right sibling NP 9 predicted value of left-sibling NONE 10 predicted value of child nodes NONE 11 left-most internal word to 12 right-most internal word action 13 left neighboring external word returns 14 right neighboring external word tonight 15 head word of current node to 16 head word of parent node returns 17 is current node the head child false 18 label/tag of head child TO 19 predicted value of the head child NONE Table 1: Feature functions: the 2nd column contains the descriptions of each feature, and the 3rd column the feature value when it is applied to the PP-node in Figure 1. Feature 11 to 19 are lexical features or computed from head nodes. Feature 11 and 12 compute the node-internal boundary words, while Feature 13 and 14 compute the immediate node-external boundary words. Feature 15 to 19 rely on the head information. For instance, Feature 15 computes the head word of the current node, which is to for the PPnode in Figure 1. Feature 16 computes the same for the parent node. Feature 17 tests if the current node is the head of its parent. Feature 18 and 19 compute the label or POS tag and the predicted value of the head child, respectively. Besides the basic feature presented in Table 1, we also use conjunction features. For instance, applying the conjunction of Feature 1 and 18 to the PP-node in Figure 1 would yield a feature instance that captures the fact that the current node is a PP node and its head child’s POS tag is TO. 4 Applications and Results A wide variety of language problems can be treated as or cast into a tree annotating problem. In this section, we present three applications of the statistical tree annotator. The first application is to predict function tags of an input syntactic parse tree; the second one is to predict Chinese empty elements; and the third one is to predict whether a syntactic constituent of a source sentence is projectable, meaning if the constituent will have a contiguous translation on the target language. 4.1 Predicting Function Tags In the English Penn Treebank (Marcus et al., 1993) and more recent OntoNotes data (Hovy et al., 2006), some tree nodes are assigned a function tag, which is of one of the four types: grammatical, form/function, topicalization and miscellaneous. 
Table 2 contains a list of function tags used in the English Penn Treebank (Bies et al., 1995). The “Grammatical” row contains function tags marking the grammatical role of a constituent, e.g., DTV for dative objects, LGS for logical subjects etc. Many tags in the “Form/function” row carry semantic information, e.g., LOC is for locative expressions, and TMP for temporal expressions. Type Function Tags Grammatical (52.2%) DTV LGS PRD PUT SBJ VOC Form/function (36.2%) ADV BNF DIR EXT LOC MNR NOM PRP TMP Topicalization (2.2%) TPC Miscellaneous (9.4%) CLF CLR HLN TTL Table 2: Four types of function tags and their relative frequency 4.1.1 Comparison with Prior Arts In order to have a direct comparison with (Blaheta and Charniak, 2000; Lintean and Rus, 2007a), we use the same English Penn Treebank (Marcus et al., 1993) and partition the data set identically: Section 1232 2-21 of Wall Street Journal (WSJ) data for training and Section 23 as the test set. We use all features in Table 1 and build four models, each of which predicting one type of function tags. The results are tabulated in Table 3. As can be seen, our system performs much better than both (Blaheta and Charniak, 2000) and (Lintean and Rus, 2007a). For two major categories, namely grammatical and form/function which account for 96.84% non-null function tags in the test set, our system achieves a relative error reduction of 77.1% (from (Blaheta and Charniak, 2000)’s 1.09% to 0.25%) and 46.9%(from (Blaheta and Charniak, 2000)’s 2.90% to 1.54%) , respectively. The performance improvements result from a clean learning framework and some new features we introduced: e.g., the node-external features, i.e., Feature 13 and 14 in Table 1, can capture long-range statistical dependencies in the conditional model (2) and are proved very useful (cf. Section 4.1.2). As far as we can tell, they are not used in previous work. Type Blaheta00 Lintean07 Ours Grammar 98.91% 98.45% 99.75% Form/Func 97.10% 95.15% 98.46% topic 99.92% 99.87% 99.98% Misc 98.65% 98.54% 99.41% Table 3: Function tag prediction accuracies on gold parse trees: breakdown by types of function tags. The 2nd column is due to (Blaheta and Charniak, 2000) and 3rd column due to (Lintean and Rus, 2007a). Our results on the 4th column compare favorably with theirs. 4.1.2 Relative Contributions of Features Since the English WSJ data set contains newswire text, the most recent OntoNotes (Hovy et al., 2006) contains text from a more diversified genres such as broadcast news and broadcast conversation, we decide to test our system on this data set as well. WSJ Section 24 is used for development and Section 23 for test, and the rest is used as the training data. Note that some WSJ files were not included in the OntoNotes release and Section 23 in OntoNotes contains only 1640 sentences. The OntoNotes data statistics is tabulated in Table 4. Less than 2% of nodes with non-empty function tags were assigned multiple function tags. To simplify the system building, we take the first tag in training and testing and report the aggregated accuracy only in this section. #-sents #-nodes #-funcNodes training 71,186 1,242,747 280,755 test 1,640 31,117 6,778 Table 4: Statistics of OntoNotes: #-sents – number of sentences; #-nodes – number of non-terminal nodes; #-funcNodes – number of nodes containing non-empty function tags. We use this data set to test relative contributions of different feature groups by incrementally adding features into the system, and the results are reported in Table 5. 
The dummy baseline is predicting the most likely prior – the empty function tag, which indicates that there are 78.21% of nodes without a function tag. The next line reflects the performance of a system with non-lexical features only (Feature 1 to 8 in Table 1), and the result is fairly poor with an accuracy 91.51%. The past predictions (Feature 8 and 9) helps a bit by improving the accuracy to 92.04%. Node internal lexical features (Feature 11 and 12) are extremely useful: it added more than 3 points to the accuracy. So does the node external lexical features (Feature 13 and 14) which added an additional 1.52 points. Features computed from head words (Feature 15 to 19) carry information complementary to the lexical features and it helps quite a bit by improving the accuracy by 0.64%. When all features are used, the system reached an accuracy of 97.34%. From these results, we can conclude that, unlike syntactic parsing (Bikel, 2004), lexical information is extremely important for predicting and recovering function tags. This is not surprising since many function tags carry semantic information, and more often than not, the ambiguity can only be resolved by lexical information. E.g., whether a PP is locative or temporal PP is heavily influenced by the lexical choice of the NP argument. 4.2 Predicting Chinese Empty Elements As is well known, Chinese is a pro-drop language. This and its lack of subordinate conjunction complementizers lead to the ubiquitous use of empty elements in the Chinese treebank (Xue et al., 2005). Predicting or recovering these empty elements is therefore important for the Chinese language pro1233 Feature Set Accuracy prior (guess NONE) 78.21% Non-lexical labels only 91.52% +past prediction 92.04% +node-internal lexical 95.17% +node-external lexical 96.70% +head word 97.34% Table 5: Effects of feature sets: the second row contains the baseline result when always predicting NONE; Row 3 through 8 contain results by incrementally adding feature sets. cessing. Recently, Chung and Gildea (2010) has found it useful to recover empty elements in machine translation. Since empty elements do not have any surface string representation, we tackle the problem by attaching a pseudo function tag to an empty element’s lowest non-empty parent and then removing the subtree spanning it. Figure 2 contains an example tree before and after removing the empty element *pro* and annotating the non-empty parent with a pseudo function tag NoneL. The transformation procedure is summarized in Algorithm 1. In particular, line 2 of Algorithm 1 find the lowest parent of an empty element that spans at least one non-trace word. In the example in Figure 2, it would find the top IP-node. Since *pro* is the left-most child, line 4 of Algorithm 1 adds the pseudo function tag NoneL to the top IP-node. Line 9 then removes its NP child node and all lower children (i.e., shaded subtree in Figure 2(1)), resulting in the tree in Figure 2(2). Line 4 to 8 of Algorithm 1 indicate that there are 3 types of pseudo function tags: NoneL, NoneM, and NoneR, encoding a trace found in the left, middle or right position of its lowest non-empty parent. It’s trivial to recover a trace’s position in a sentence from NoneL, and NoneR, but it may be ambiguous for NoneM. The problem could be solved either using heuristics to determine the position of a middle empty element, or encoding the positional information in the pseudo function tag. 
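As a rough illustration of this transform (the full procedure is listed as Algorithm 1 below), the sketch here assumes tree nodes exposing `.label`, `.children` and `.parent`, with an `.is_trace` flag on empty-element leaves such as *pro*; reading "left-most/right-most child" in Algorithm 1 as the child subtree of p that contains the trace, as in the Figure 2 example, is our interpretation rather than a detail the paper spells out.

```python
def leaves(node):
    return [node] if not node.children else [l for c in node.children for l in leaves(c)]

def spans_non_trace_word(node):
    """True if the subtree rooted at `node` covers at least one non-trace token."""
    return any(not getattr(l, "is_trace", False) for l in leaves(node))

def add_pseudo_tags(root):
    """Remove each trace (with its empty ancestors) and tag the lowest non-empty parent."""
    for trace in [l for l in leaves(root) if getattr(l, "is_trace", False)]:
        # Line 2 of Algorithm 1: lowest ancestor spanning at least one non-trace word.
        p = trace.parent
        while p is not None and not spans_non_trace_word(p):
            p = p.parent
        if p is None:
            continue
        # Walk back down one step: the child of p whose subtree contains the trace.
        child = trace
        while child.parent is not p:
            child = child.parent
        pos = next((i for i, c in enumerate(p.children) if c is child), None)
        if pos is None:                  # already removed together with an earlier trace
            continue
        # Lines 3-8: encode the trace position as a pseudo function tag on p.
        if pos == 0:
            p.label += "-NoneL"
        elif pos == len(p.children) - 1:
            p.label += "-NoneR"
        else:
            p.label += "-NoneM"
        # Line 9: remove that child and everything below it.
        del p.children[pos]
```

On the Figure 2 tree this yields IP-NoneL on the top IP node and drops the NP subtree spanning *pro*, matching the transformed tree in Figure 2(2).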
Since here we just want to show that predicting empty elements can be cast as a tree annotation problem, we leave this option to future research. With this transform, the problem of predicting a trace is cast into predicting the corresponding JJ NN NN NN NP NP VP VP (1) Original tree with a trace (the left−most child of the top IP−node) NP NP VP VP NN NN NN AD VE JJ VV IP IP−NoneL ran2hou4 you3 zhuan3men2 dui4wu3 jin4xing2 jian1du1 jian3cha2 (2) After removing trace and its parent node (shaded subtree in (1)) NP NONE AD IP IP VV VE *pro* ran2hou4 you3 zhuan3men2 dui4wu3 jin4xing2 jian1du1 jian3cha2 Figure 2: Transform of traces in a Chinese parse tree by adding pseudo function tags. Algorithm 1 Procedure to remove empty elements and add pseudo function tags. Input: An input tree Output: a tree after removing traces (and their empty parents) and adding pseudo function tags to its lowest non-empty parent node 1:Foreach trace t 2: Find its lowest ancestor node p spanning at least one non-trace word 3: if t is p’s left-most child 4: add pseudo tag NoneL to p 5: else if t is p’s right-most child 6: add pseudo tag NoneR to p 7: else 8: add pseudo tag NoneM to p 9: Remove p’s child spanning the trace t and all its children 1234 pseudo function tag and the statistical tree annotator can thus be used to solve this problem. 4.2.1 Results We use Chinese Treebank v6.0 (Xue et al., 2005) and the broadcast conversation data from CTB v7.0 2. The data set is partitioned into training, development and blind test as shown in Table 6. The partition is created so that different genres are well represented in different subsets. The training, development and test set have 32925, 3297 and 3033 sentences, respectively. Subset File IDs Training 0001-0325, 0400-0454, 0600-0840 0500-0542, 2000-3000, 0590-0596 1001-1120, cctv,cnn,msnbc, phoenix 00-06 Dev 0841-0885, 0543-0548, 3001-3075 1121-1135, phoenix 07-09 Test 0900-0931,0549-0554, 3076-3145 1136-1151, phoenix 10-11 Table 6: Data partition for CTB6 and CTB 7’s broadcast conversation portion We then apply Algorithm 1 to transform trees and predict pseudo function tags. Out of 1,100,506 nonterminal nodes in the training data, 80,212 of them contain pseudo function tags. There are 94 nodes containing 2 pseudo function tags. The vast majority of pseudo tags – more then 99.7% – are attached to either IP, CP, or VP: 50971, 20113, 8900, respectively. We used all features in Table 1 and achieved an accuracy of 99.70% on the development data, and 99.71% on the test data on gold trees. To understand why the accuracies are so high, we look into the 5 most frequent labels carrying pseudo tags in the development set, and tabulate their performance in Table 7. The 2nd column contains the number of nodes in the reference; the 3rd column the number of nodes of system output; the 4th column the number of nodes with correct prediction; and the 5th column F-measure for each label. From Table 7, it is clear that CP-NoneL and IP-NoneL are easy to predict. This is not surprising, given that the Chinese language lacks of 2Many files are missing in LDC’s early 2010 release of CTB 7.0, but broadcast conversation portion is new and is used in our system. Label numRef numSys numCorr F1 CP-NoneL 1723 1724 1715 0.995 IP-NoneL 3874 3875 3844 0.992 VP-NoneR 660 633 597 0.923 IP-NoneM 440 432 408 0.936 VP-NoneL 135 107 105 0.868 Table 7: 5 most frequent labels carrying pseudo tags and their performances complementizers for subordinate clauses. 
In other words, left-most empty elements under CP are almost unambiguous: if a CP node has an immediate IP child, it almost always has a left-most empty element; similarly, if an IP node has a VP node as the left-most child (i.e., without a subject), it almost always should have a left empty element (e.g., marking the dropped pro). Another way to interpret these results is as follows: when developing the Chinese treebank, there is really no point to annotate leftmost traces for CP and IP when tree structures are available. On the other hand, predicting the left-most empty elements for VP is a lot harder: the F-measure is only 86.8% for VP-NoneL. Predicting the rightmost empty elements under VP and middle empty elements under IP is somewhat easier: VP-NoneR and IP-NoneM’s F-measures are 92.3% and 93.6%, respectively. 4.3 Predicting Projectable Constituents The third application is predicting projectable constituents for machine translation. State-of-the-art machine translation systems (Yamada and Knight, 2001; Xiong et al., 2010; Shen et al., 2008; Chiang, 2010; Shen et al., 2010) rely heavily on syntactic analysis. Projectable structures are important in that it is assumed in CFG-style translation rules that a source span can be translated contiguously. Clearly, not all source constituents can be translated this way, but if we can predict whether a non-terminal source node is projectable, we can avoid translation errors by bypassing or discouraging the derivation paths relying on non-projectable constituents, or using phrase-based approaches for non-projectable constituents. We start from LDC’s bilingual Arabic-English treebank with source human parse trees and alignments, and mark source constituents as either pro1235 NOUN b# sbb " " l# Alms&wl the Iraqi official ’s sudden obligations " . tAr}p AltzAmAt PREP Because of " NOUN S PP# NP#1 NP#2 NP PP NP AlErAqy . PUNC PREP DET+NOUN DET+ADJ ADJ PUNC PUNC Figure 3: An example to show how a source tree is annotated with its alignment with the target sentence. jectable or non-projectable. The binary annotations can again be treated as pseudo function tags and the proposed tree annotator can be readily applied to this problem. As an example, the top half of Figure 3 contains an Arabic sentence with its parse tree; the bottom is its English translation with the human wordalignment. There are three non-projectable constituents marked with “#”: the top PP# spanning the whole sentence except the final stop, and NP#1 and NP#2. The PP# node is not projectable due to an inserted stop from outside; NP#1 is not projectable because it is involved in a 2-to-2 alignment with the token b# outside NP#1; NP#2 is aligned to a span the Iraqi official ’s sudden obligations ., in which Iraqi official breaks the contiguity of the translation. It is clear that a CFG-like grammar will not be able to generate the translation for NP#2. The LDC’s Arabic-English bilingual treebank does not mark if a source node is projectable or not, but the information can be computed from word alignment. In our experiments, we processed 16,125 sentence pairs with human source trees for training, and 1,151 sentence pairs for testing. The statistics of the training and test data can be found in Table 8, where the number of sentences, the number of nonterminal nodes and the number of non-projectable nodes are listed in Column 2 through 4, respectively. 
Data Set #Sents #nodes #NonProj Training 16,125 558,365 121,201 Test 1,151 40,674 8,671 Table 8: Statistics of the data for predicting projectable constituents We get a 94.6% accuracy for predicting projectable constituents on the gold trees, and an 84.7% F-measure on the machine-generated parse trees. This component has been integrated into our machine translation system (Zhao et al., 2011). 5 Related Work Blaheta and Charniak (2000) used a feature tree model to predict function tags. The work was later extended to use the voted perceptron (Blaheta, 2003). There are considerable overlap in terms of features used in (Blaheta and Charniak, 2000; Blaheta, 2003) and our system: for example, the label of current node, parent node and sibling nodes. However, there are some features that are unique in our work, e.g., lexical features at a constituent boundaries (node-internal and node-external words). Table 2 of (Blaheta and Charniak, 2000) contains the ac1236 curacies for 4 types of function tags, and our results in Table 3 compare favorably with those in (Blaheta and Charniak, 2000). Lintean and Rus (2007a; Lintean and Rus (2007b) also studied the function tagging problem and applied naive Bayes and decision tree to it. Their accuracy results are worse than (Blaheta and Charniak, 2000). Neither (Blaheta and Charniak, 2000) nor (Lintean and Rus, 2007a; Lintean and Rus, 2007b) reported the relative usefulness of different features, while we found that the lexical features are extremely useful. Campbell (2004) and Schmid (2006) studied the problem of predicting and recovering empty categories, but they used very different approaches: in (Campbell, 2004), a rule-based approach is used while (Schmid, 2006) used a non-lexical PCFG similar to (Klein and Manning, 2003). Chung and Gildea (2010) studied the effects of empty categories on machine translation and they found that even with noisy machine predictions, empty categories still helped machine translation. In this paper, we showed that empty categories can be encoded as pseudo function tags and thus predicting and recovering empty categories can be cast as a tree annotating problem. Our results also shed light on some empty categories can almost be determined unambiguously, given a gold tree structure, which suggests that these empty elements do not need to be annotated. Gabbard et al. (2006) modified Collins’ parser to output function tags. Since their results for predicting function tags are on system parses, they are not comparable with ours. (Gabbard et al., 2006) also contains a second stage employing multiple classifiers to recover empty categories and resolve coindexations between an empty element and its antecedent. As for predicting projectable constituent, it is related to the work described in (Xiong et al., 2010), where they were predicting translation boundaries. A major difference is that (Xiong et al., 2010) defines projectable spans on a left-branching derivation tree solely for their phrase decoder and models, while translation boundaries in our work are defined from source parse trees. Our work uses more resources, but the prediction accuracy is higher (modulated on a different test data): we get a F-measure 84.7%, in contrast with (Xiong et al., 2010)’s 71%. 6 Conclusions and Future Work We proposed a generic statistical tree annotator in the paper. 
We have shown that a variety of natural language problems can be tackled with the proposed tree annotator, from predicting function tags, predicting empty categories, to predicting projectable syntactic constituents for machine translation. Our results of predicting function tags compare favorably with published results on the same data set, possibly due to new features employed in the system. We showed that empty categories can be represented as pseudo function tags, and thus predicting empty categories can be solved with the proposed tree annotator. The same technique can be used to predict projectable syntactic constituents for machine translation. There are several directions to expand the work described in this paper. First, the results for predicting function tags and Chinese empty elements were obtained on human-annotated trees and it would be interesting to do it on parse trees generated by system. Second, predicting projectable constituents is for improving machine translation and we are integrating the component into a syntax-based machine translation system. Acknowledgments This work was partially supported by the Defense Advanced Research Projects Agency under contract No. HR0011-08-C-0110. The views and findings contained in this material are those of the authors and do not necessarily reflect the position or policy of the U.S. government and no official endorsement should be inferred. We are also grateful to three anonymous reviewers for their suggestions and comments for improving the paper. References Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71, March. Ann Bies, Mark Ferguson, and karen Katz. 1995. Bracketing guidelines for treebank II-style penn treebank project. Technical report, Linguistic Data Consortium. Daniel M. Bikel. 2004. A distributional analysis of a lexicalized statistical parsing model. In Dekang Lin 1237 and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 182–189, Barcelona, Spain, July. Association for Computational Linguistics. Don Blaheta and Eugene Charniak. 2000. Assigning function tags to parsed text. In Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics, pages 234–240. Don Blaheta. 2003. Function Tagging. Ph.D. thesis, Brown University. Richard Campbell. 2004. Using linguistic principles to recover empty categories. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 645–652, Barcelona, Spain, July. Xavier Carreras, Michael Collins, and Terry Koo. 2008. TAG, dynamic programming, and the perceptron for efficient, feature-rich parsing. In Proceedings of CoNLL. E. Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of NAACL, Seattle. David Chiang. 2010. Learning to translate with source and target syntax. In Proc. ACL, pages 1443–1452. Tagyoung Chung and Daniel Gildea. 2010. Effects of empty categories on machine translation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 636–645, Cambridge, MA, October. Association for Computational Linguistics. Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proc. Annual Meeting of ACL, pages 16–23. Peter Dienes, P Eter Dienes, and Amit Dubey. 2003. Antecedent recovery: Experiments with a trace tagger. 
In In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 33–40. Ryan Gabbard, Mitchell Marcus, and Seth Kulick. 2006. Fully parsing the Penn Treebank. In Proceedings of Human Language Technology Conference of the North Amer- ican Chapter of the Association of Computational Linguistics. Joshua Goodman. 2002. Sequential conditional generalized iterative scaling. In Pro. of the 40th ACL. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 57–60, New York City, USA, June. Association for Computational Linguistics. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Erhard Hinrichs and Dan Roth, editors, Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430. Mihai Lintean and V. Rus. 2007a. Large scale experiments with function tagging. In Proceedings of the International Conference on Knowledge Engineering, pages 1–7. Mihai Lintean and V. Rus. 2007b. Naive Bayes and decision trees for function tagging. In Proceedings of the International Conference of the FLAIRS-2007. David M. Magerman. 1994. Natural Language Parsing As Statistical Pattern Recognition. Ph.D. thesis, Stanford University. Robert Malouf. 2002. A comparison of algorithms for maximum entropy parameter estimation. In the Sixth Conference on Natural Language Learning (CoNLL2002), pages 49–55. M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn treebank. Computational Linguistics, 19(2):313–330. Adwait Ratnaparkhi. 1997. A Linear Observed Time Statistical Parser Based on Maximum Entropy Models. In Second Conference on Empirical Methods in Natural Language Processing, pages 1 – 10. Helmut Schmid. 2006. Trace prediction and recovery with unlexicalized PCFGs and slash features. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 177–184, Sydney, Australia, July. Association for Computational Linguistics. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL. Libin Shen, Bing Zhang, Spyros Matsoukas, Jinxi Xu, and Ralph Weischedel. 2010. Statistical machine translation with a factorized grammar. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 616–625, Cambridge, MA, October. Association for Computational Linguistics. Deyi Xiong, Min Zhang, and Haizhou Li. 2010. Learning translation boundaries for phrase-based decoding. In NAACL-HLT 2010. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proc. Annual Meeting of the Association for Computational Linguistics. Bing Zhao, , Young-Suk Lee, Xiaoqiang Luo, and Liu Li. 2011. Learning to transform and select elementary trees for improved syntax-based machine translations. In Proc. of ACL. 1238
2011
123
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1239–1248, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Consistent Translation using Discriminative Learning: A Translation Memory-inspired Approach∗ Yanjun Ma† Yifan He‡ Andy Way‡ Josef van Genabith‡ † Baidu Inc., Beijing, China [email protected] ‡Centre for Next Generation Localisation School of Computing, Dublin City University {yhe,away,josef}@computing.dcu.ie Abstract We present a discriminative learning method to improve the consistency of translations in phrase-based Statistical Machine Translation (SMT) systems. Our method is inspired by Translation Memory (TM) systems which are widely used by human translators in industrial settings. We constrain the translation of an input sentence using the most similar ‘translation example’ retrieved from the TM. Differently from previous research which used simple fuzzy match thresholds, these constraints are imposed using discriminative learning to optimise the translation performance. We observe that using this method can benefit the SMT system by not only producing consistent translations, but also improved translation outputs. We report a 0.9 point improvement in terms of BLEU score on English–Chinese technical documents. 1 Introduction Translation consistency is an important factor for large-scale translation, especially for domainspecific translations in an industrial environment. For example, in the translation of technical documents, lexical as well as structural consistency is essential to produce a fluent target-language sentence. Moreover, even in the case of translation errors, consistency in the errors (e.g. repetitive error patterns) are easier to diagnose and subsequently correct by translators. ∗This work was done while the first author was in the Centre for Next Generation Localisation at Dublin City University. In phrase-based SMT, translation models and language models are automatically learned and/or generalised from the training data, and a translation is produced by maximising a weighted combination of these models. Given that global contextual information is not normally incorporated, and that training data is usually noisy in nature, there is no guarantee that an SMT system can produce translations in a consistent manner. On the other hand, TM systems – widely used by translators in industrial environments for enterprise localisation by translators – can shed some light on mitigating this limitation. TM systems can assist translators by retrieving and displaying previously translated similar ‘example’ sentences (displayed as source-target pairs, widely called ‘fuzzy matches’ in the localisation industry (Sikes, 2007)). In TM systems, fuzzy matches are retrieved by calculating the similarity or the so-called ‘fuzzy match score’ (ranging from 0 to 1 with 0 indicating no matches and 1 indicating a full match) between the input sentence and sentences in the source side of the translation memory. When presented with fuzzy matches, translators can then avail of useful chunks in previous translations while composing the translation of a new sentence. Most translators only consider a few sentences that are most similar to the current input sentence; this process can inherently improve the consistency of translation, given that the new translations produced by translators are likely to be similar to the target side of the fuzzy match they have consulted. 
Previous research as discussed in detail in Sec1239 tion 2 has focused on using fuzzy match score as a threshold when using the target side of the fuzzy matches to constrain the translation of the input sentence. In our approach, we use a more finegrained discriminative learning method to determine whether the target side of the fuzzy matches should be used as a constraint in translating the input sentence. We demonstrate that our method can consistently improve translation quality. The rest of the paper is organized as follows: we begin by briefly introducing related research in Section 2. We present our discriminative learning method for consistent translation in Section 3 and our feature design in Section 4. We report the experimental results in Section 5 and conclude the paper and point out avenues for future research in Section 6. 2 Related Research Despite the fact that TM and MT integration has long existed as a major challenge in the localisation industry, it has only recently received attention in main-stream MT research. One can loosely combine TM and MT at sentence (called segments in TMs) level by choosing one of them (or both) to recommend to the translators using automatic classifiers (He et al., 2010), or simply using fuzzy match score or MT confidence measures (Specia et al., 2009). One can also tightly integrate TM with MT at the sub-sentence level. The basic idea is as follows: given a source sentence to translate, we firstly use a TM system to retrieve the most similar ‘example’ source sentences together with their translations. If matched chunks between input sentence and fuzzy matches can be detected, we can directly re-use the corresponding parts of the translation in the fuzzy matches, and use an MT system to translate the remaining chunks. As a matter of fact, implementing this idea is pretty straightforward: a TM system can easily detect the word alignment between the input sentence and the source side of the fuzzy match by retracing the paths used in calculating the fuzzy match score. To obtain the translation for the matched chunks, we just require the word alignment between source and target TM matches, which can be addressed using state-of-the-art word alignment techniques. More importantly, albeit not explicitly spelled out in previous work, this method can potentially increase the consistency of translation, as the translation of new input sentences is closely informed and guided (or constrained) by previously translated sentences. There are several different ways of using the translation information derived from fuzzy matches, with the following two being the most widely adopted: 1) to add these translations into a phrase table as in (Bic¸ici and Dymetman, 2008; Simard and Isabelle, 2009), or 2) to mark up the input sentence using the relevant chunk translations in the fuzzy match, and to use an MT system to translate the parts that are not marked up, as in (Smith and Clark, 2009; Koehn and Senellart, 2010; Zhechev and van Genabith, 2010). It is worth mentioning that translation consistency was not explicitly regarded as their primary motivation in this previous work. Our research follows the direction of the second strand given that consistency can no longer be guaranteed by constructing another phrase table. 
However, to categorically reuse the translations of matched chunks without any differentiation could generate inferior translations given the fact that the context of these matched chunks in the input sentence could be completely different from the source side of the fuzzy match. To address this problem, both (Koehn and Senellart, 2010) and (Zhechev and van Genabith, 2010) used fuzzy match score as a threshold to determine whether to reuse the translations of the matched chunks. For example, (Koehn and Senellart, 2010) showed that reusing these translations as large rules in a hierarchical system (Chiang, 2005) can be beneficial when the fuzzy match score is above 70%, while (Zhechev and van Genabith, 2010) reported that it is only beneficial to a phrase-based system when the fuzzy match score is above 90%. Despite being an informative measure, using fuzzy match score as a threshold has a number of limitations. Given the fact that fuzzy match score is normally calculated based on Edit Distance (Levenshtein, 1966), a low score does not necessarily imply that the fuzzy match is harmful when used to constrain an input sentence. For example, in longer sentences where fuzzy match scores tend to be low, some chunks and the corresponding translations within the sentences can still be useful. On 1240 the other hand, a high score cannot fully guarantee the usefulness of a particular translation. We address this problem using discriminative learning. 3 Constrained Translation with Discriminative Learning 3.1 Formulation of the Problem Given a sentence e to translate, we retrieve the most similar sentence e′ from the translation memory associated with target translation f ′. The m common “phrases” ¯em 1 between e and e′ can be identified. Given the word alignment information between e′ and f ′, one can easily obtain the corresponding translations ¯f ′m 1 for each of the phrases in ¯em 1 . This process can derive a number of “phrase pairs” < ¯em, ¯f ′m >, which can be used to specify the translations of the matched phrases in the input sentence. The remaining words without specified translations will be translated by an MT system. For example, given an input sentence e1e2 · · · eiei+1 · · · eI, and a phrase pair < ¯e, ¯f ′ >, ¯e = eiei+1, ¯f ′ = f ′ jf ′ j+1 derived from the fuzzy match, we can mark up the input sentence as: e1e2 · · · <tm=“f ′ jf ′ j+1”> eiei+1 < /tm> · · · eI. Our method to constrain the translations using TM fuzzy matches is similar to (Koehn and Senellart, 2010), except that the word alignment between e′ and f ′ is the intersection of bidirectional GIZA++ (Och and Ney, 2003) posterior alignments. We use the intersected word alignment to minimise the noise introduced by word alignment of only one direction in marking up the input sentence. 3.2 Discriminative Learning Whether the translation information from the fuzzy matches should be used or not (i.e. whether the input sentence should be marked up) is determined using a discriminative learning procedure. The translation information refers to the “phrase pairs” derived using the method described in Section 3.1. We cast this problem as a binary classification problem. 3.2.1 Support Vector Machines SVMs (Cortes and Vapnik, 1995) are binary classifiers that classify an input instance based on decision rules which minimise the regularised error function in (1): min w,b,ξ 1 2wT w + C l X i=1 ξi s. t. 
yi(wT φ(xi) + b) ⩾1 −ξi ξi ⩾0 (1) where (xi, yi) ∈Rn × {+1, −1} are l training instances that are mapped by the function φ to a higher dimensional space. w is the weight vector, ξ is the relaxation variable and C > 0 is the penalty parameter. Solving SVMs is viable using a kernel function K in (1) with K(xi, xj) = Φ(xi)T Φ(xj). We perform our experiments with the Radial Basis Function (RBF) kernel, as in (2): K(xi, xj) = exp(−γ||xi −xj||2), γ > 0 (2) When using SVMs with the RBF kernel, we have two free parameters to tune on: the cost parameter C in (1) and the radius parameter γ in (2). In each of our experimental settings, the parameters C and γ are optimised by a brute-force grid search. The classification result of each set of parameters is evaluated by cross validation on the training set. The SVM classifier will thus be able to predict the usefulness of the TM fuzzy match, and determine whether the input sentence should be marked up using relevant phrase pairs derived from the fuzzy match before sending it to the SMT system for translation. The classifier uses features such as the fuzzy match score, the phrase and lexical translation probabilities of these relevant phrase pairs, and additional syntactic dependency features. Ideally the classifier will decide to mark up the input sentence if the translations of the marked phrases are accurate when taken contextual information into account. As large-scale manually annotated data is not available for this task, we use automatic TER scores (Snover et al., 2006) as the measure for training data annotation. We label the training examples as in (3): y = ( +1 if T ER(w. markup) < T ER(w/o markup) −1 if T ER(w/o markup) ≥T ER(w. markup) (3) Each instance is associated with a set of features which are discussed in more detail in Section 4. 1241 3.2.2 Classification Confidence Estimation We use the techniques proposed by (Platt, 1999) and improved by (Lin et al., 2007) to convert classification margin to posterior probability, so that we can easily threshold our classifier (cf. Section 5.4.2). Platt’s method estimates the posterior probability with a sigmoid function, as in (4): Pr(y = 1|x) ≈PA,B(f) ≡ 1 1 + exp(Af + B) (4) where f = f(x) is the decision function of the estimated SVM. A and B are parameters that minimise the cross-entropy error function F on the training data, as in (5): min z=(A,B) F(z) = − l X i=1 (tilog(pi) + (1 −ti)log(1 −pi)), where pi = PA,B(fi), and ti = ( N++1 N++2 if yi = +1 1 N−+2 if yi = −1 (5) where z = (A, B) is a parameter setting, and N+ and N−are the numbers of observed positive and negative examples, respectively, for the label yi. These numbers are obtained using an internal crossvalidation on the training set. 4 Feature Set The features used to train the discriminative classifier, all on the sentence level, are described in the following sections. 4.1 The TM Feature The TM feature is the fuzzy match score, which indicates the overall similarity between the input sentence and the source side of the TM output. If the input sentence is similar to the source side of the matching segment, it is more likely that the matching segment can be used to mark up the input sentence. The calculation of the fuzzy match score itself is one of the core technologies in TM systems, and varies among different vendors. 
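As a concrete point of reference, the sketch below computes a word-level variant of this cost, anticipating the definition in Eq. (6) just below; the whitespace tokenisation and the brute-force scan over all TM entries are simplifying assumptions (real TM engines index the memory and allow more flexible matching).

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance by dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + sub)    # substitution
    return d[m][n]

def fuzzy_match_cost(e, tm_sources):
    """h_fm(e): minimum edit distance to any TM source segment, normalised by len(e)."""
    e_toks = e.split()
    return min(edit_distance(e_toks, s.split()) for s in tm_sources) / len(e_toks)

# A fuzzy match score F then corresponds roughly to 1 - fuzzy_match_cost(e, tm_sources).
```

This single score is the TM feature; the translation and dependency features described next supply the finer-grained signal that a raw edit-distance measure lacks.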
We compute fuzzy match cost as the minimum Edit Distance (Levenshtein, 1966) between the source and TM entry, normalised by the length of the source as in (6), as most of the current implementations are based on edit distance while allowing some additional flexible matching. hfm(e) = min s EditDistance(e, s) Len(e) (6) where e is the sentence to translate, and s is the source side of an entry in the TM. For fuzzy match scores F, hfm roughly corresponds to 1 −F. 4.2 Translation Features We use four features related to translation probabilities, i.e. the phrase translation and lexical probabilities for the phrase pairs < ¯em, ¯f ′m > derived using the method in Section 3.1. Specifically, we use the phrase translation probabilities p( ¯f ′m|¯em) and p(¯em| ¯f ′m), as well as the lexical translation probabilities plex( ¯f ′m|¯em) and plex(¯em| ¯f ′m) as calculated in (Koehn et al., 2003). In cases where multiple phrase pairs are used to mark up one single input sentence e, we use a unified score for each of the four features, which is an average over the corresponding feature in each phrase pair. The intuition behind these features is as follows: phrase pairs < ¯em, ¯f ′m > derived from the fuzzy match should also be reliable with respect to statistically produced models. We also have a count feature, i.e. the number of phrases used to mark up the input sentence, and a binary feature, i.e. whether the phrase table contains at least one phrase pair < ¯em, ¯f ′m > that is used to mark up the input sentence. 4.3 Dependency Features Given the phrase pairs < ¯em, ¯f ′m > derived from the fuzzy match, and used to translate the corresponding chunks of the input sentence (cf. Section 3.1), these translations are more likely to be coherent in the context of the particular input sentence if the matched parts on the input side are syntactically and semantically related. For matched phrases ¯em between the input sentence and the source side of the fuzzy match, we define the contextual information of the input side using dependency relations between words em in ¯em and the remaining words ej in the input sentence e. We use the Stanford parser to obtain the dependency structure of the input sentence. We add a pseudo-label SYS PUNCT to punctuation marks, whose governor and dependent are both the punctuation mark. The dependency features designed to capture the context of the matched input phrases ¯em are as follows: 1242 Coverage features measure the coverage of dependency labels on the input sentence in order to obtain a bigger picture of the matched parts in the input. For each dependency label L, we consider its head or modifier as covered if the corresponding input word em is covered by a matched phrase ¯em. Our coverage features are the frequencies of governor and dependent coverage calculated separately for each dependency label. Position features identify whether the head and the tail of a sentence are matched, as these are the cases in which the matched translation is not affected by the preceding words (when it is the head) or following words (when it is the tail), and is therefore more reliable. The feature is set to 1 if this happens, and to 0 otherwise. We distinguish among the possible dependency labels, the head or the tail of the sentence, and whether the aligned word is the governor or the dependent. As a result, each permutation of these possibilities constitutes a distinct binary feature. 
The consistency feature is a single feature which determines whether matched phrases ¯em belong to a consistent dependency structure, instead of being distributed discontinuously around in the input sentence. We assume that a consistent structure is less influenced by its surrounding context. We set this feature to 1 if every word in ¯em is dependent on another word in ¯em, and to 0 otherwise. 5 Experiments 5.1 Experimental Setup Our data set is an English–Chinese translation memory with technical translation from Symantec, consisting of 87K sentence pairs. The average sentence length of the English training set is 13.3 words and the size of the training set is comparable to the larger TMs used in the industry. Detailed corpus statistics about the training, development and test sets for the SMT system are shown in Table 1. The composition of test subsets based on fuzzy match scores is shown in Table 2. We can see that sentences in the test sets are longer than those in the training data, implying a relatively difficult translation task. We train the SVM classifier using the libSVM (Chang and Lin, 2001) toolkit. The SVMTrain Develop Test SENTENCES 86,602 762 943 ENG. TOKENS 1,148,126 13,955 20,786 ENG. VOC. 13,074 3,212 3,115 CHI. TOKENS 1,171,322 10,791 16,375 CHI. VOC. 12,823 3,212 1,431 Table 1: Corpus Statistics Scores Sentences Words W/S (0.9, 1.0) 80 1526 19.0750 (0.8, 0.9] 96 1430 14.8958 (0.7, 0.8] 110 1596 14.5091 (0.6, 0.7] 74 1031 13.9324 (0.5, 0.6] 104 1811 17.4135 (0, 0.5] 479 8972 18.7307 Table 2: Composition of test subsets based on fuzzy match scores training and validation is on the same training sentences1 as the SMT system with 5-fold cross validation. The SVM hyper-parameters are tuned using the training data of the first fold in the 5-fold cross validation via a brute force grid search. More specifically, for parameter C in (1), we search in the range [2−5, 215], while for parameter γ (2) we search in the range [2−15, 23]. The step size is 2 on the exponent. We conducted experiments using a standard loglinear PB-SMT model: GIZA++ implementation of IBM word alignment model 4 (Och and Ney, 2003), the refinement and phrase-extraction heuristics described in (Koehn et al., 2003), minimum-errorrate training (Och, 2003), a 5-gram language model with Kneser-Ney smoothing (Kneser and Ney, 1995) trained with SRILM (Stolcke, 2002) on the Chinese side of the training data, and Moses (Koehn et al., 2007) which is capable of handling user-specified translations for some portions of the input during decoding. The maximum phrase length is set to 7. 5.2 Evaluation The performance of the phrase-based SMT system is measured by BLEU score (Papineni et al., 2002) and TER (Snover et al., 2006). Significance test1We have around 87K sentence pairs in our training data. However, for 67.5% of the input sentences, our MT system produces the same translation irrespective of whether the input sentence is marked up or not. 1243 ing is carried out using approximate randomisation (Noreen, 1989) with a 95% confidence level. We also measure the quality of the classification by precision and recall. Let A be the set of predicted markup input sentences, and B be the set of input sentences where the markup version has a lower TER score than the plain version. 
We standardly define precision P and recall R as in (7): P = |A T B| |A| , R = |A T B| |B| (7) 5.3 Cross-fold translation In order to obtain training samples for the classifier, we need to label each sentence in the SMT training data as to whether marking up the sentence can produce better translations. To achieve this, we translate both the marked-up versions and plain versions of the sentence and compare the two translations using the sentence-level evaluation metric TER. We do not make use of additional training data to translate the sentences for SMT training, but instead use cross-fold translation. We create a new training corpus T by keeping 95% of the sentences in the original training corpus, and creating a new test corpus H by using the remaining 5% of the sentences. Using this scheme we make 20 different pairs of corpora (Ti, Hi) in such a way that each sentence from the original training corpus is in exactly one Hi for some 1 ≤i ≤20. We train 20 different systems using each Ti, and use each system to translate the corresponding Hi as well as the marked-up version of Hi using the procedure described in Section 3.1. The development set is kept the same for all systems. 5.4 Experimental Results 5.4.1 Translation Results Table 3 contains the translation results of the SMT system when we use discriminative learning to mark up the input sentence (MARKUP-DL). The first row (BASELINE) is the result of translating plain test sets without any markup, while the second row is the result when all the test sentences are marked up. We also report the oracle scores, i.e. the upperbound of using our discriminative learning approach. As we can see from this table, we obtain significantly inferior results compared to the the Baseline system if we categorically mark up all the inTER BLEU BASELINE 39.82 45.80 MARKUP 41.62 44.41 MARKUP-DL 39.61 46.46 ORACLE 37.27 48.32 Table 3: Performance of Discriminative Learning (%) put sentences using phrase pairs derived from fuzzy matches. This is reflected by an absolute 1.4 point drop in BLEU score and a 1.8 point increase in TER. On the other hand, both the oracle BLEU and TER scores represent as much as a 2.5 point improvement over the baseline. Our discriminative learning method (MARKUP-DL), which automatically classifies whether an input sentence should be marked up, leads to an increase of 0.7 absolute BLEU points over the BASELINE, which is statistically significant. We also observe a slight decrease in TER compared to the BASELINE. Despite there being much room for further improvement when compared to the Oracle score, the discriminative learning method appears to be effective not only in maintaining translation consistency, but also a statistically significant improvement in translation quality. 5.4.2 Classification Confidence Thresholding To further analyse our discriminative learning approach, we report the classification results on the test set using the SVM classifier. We also investigate the use of classification confidence, as described in Section 3.2.2, as a threshold to boost classification precision if required. Table 4 shows the classification and translation results when we use different confidence thresholds. The default classification confidence is 0.50, and the corresponding translation results were described in Section 5.4.1. We investigate the impact of increasing classification confidence on the performance of the classifier and the translation results. 
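Schematically, thresholding works as in the sketch below; `predict_proba`, `extract_features` and `mark_up` are assumed stand-ins for, respectively, the Platt-scaled probability of Section 3.2.2, the feature functions of Section 4, and the markup step of Section 3.1, not interfaces the paper defines.

```python
def prepare_input(sentence, fuzzy_match, classifier, threshold=0.50):
    """Mark up the input only when the classifier is confident enough that markup helps.

    `fuzzy_match` bundles the retrieved TM source/target pair and its word alignment;
    `classifier.predict_proba(x)` is assumed to return Pr(y = +1 | x) as in Eq. (4).
    """
    x = extract_features(sentence, fuzzy_match)     # TM, translation, dependency features
    if classifier.predict_proba(x) >= threshold:
        return mark_up(sentence, fuzzy_match)       # insert <tm translation="..."> spans
    return sentence                                 # fall back to the plain input

# Either version is then passed unchanged to the Moses decoder, which honours the
# user-specified translations inside the <tm> spans.
```

Raising `threshold` above the default of 0.50 corresponds to moving rightwards across the columns of Table 4.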
As can be seen from Table 4, increasing the classification confidence up to 0.70 leads to a steady increase in classification precision with a corresponding sacrifice in recall. The fluctuation in classification performance has an impact on the translation results as measured by BLEU and TER. We can see that the best BLEU as well as TER scores are achieved when we set the classification confidence to 0.60, representing a modest improve1244 Classification Confidence 0.50 0.55 0.60 0.65 0.70 0.75 0.80 BLEU 46.46 46.65 46.69 46.59 46.34 46.06 46.00 TER 39.61 39.46 39.32 39.36 39.52 39.71 39.71 P 60.00 68.67 70.31 74.47 72.97 64.28 88.89 R 32.14 29.08 22.96 17.86 13.78 9.18 4.08 Table 4: The impact of classification confidence thresholding ment over the default setting (0.50). Despite the higher precision when the confidence is set to 0.7, the dramatic decrease in recall cannot be compensated for by the increase in precision. We can also observe from Table 4 that the recall is quite low across the board, and the classification results become unstable when we further increase the level of confidence to above 0.70. This indicates the degree of difficulty of this classification task, and suggests some directions for future research as discussed at the end of this paper. 5.4.3 Comparison with Previous Work As discussed in Section 2, both (Koehn and Senellart, 2010) and (Zhechev and van Genabith, 2010) used fuzzy match score to determine whether the input sentences should be marked up. The input sentences are only marked up when the fuzzy match score is above a certain threshold. We present the results using this method in Table 5. From this taFuzzy Match Scores 0.50 0.60 0.70 0.80 0.90 BLEU 45.13 45.55 45.58 45.84 45.82 TER 40.99 40.62 40.56 40.29 40.07 Table 5: Performance using fuzzy match score for classification ble, we can see an inferior performance compared to the BASELINE results (cf. Table 3) when the fuzzy match score is below 0.70. A modest gain can only be achieved when the fuzzy match score is above 0.8. This is slightly different from the conclusions drawn in (Koehn and Senellart, 2010), where gains are observed when the fuzzy match score is above 0.7, and in (Zhechev and van Genabith, 2010) where gains are only observed when the score is above 0.9. Comparing Table 5 with Table 4, we can see that our classification method is more effective. This confirms our argument in the last paragraph of Section 2, namely that fuzzy match score is not informative enough to determine the usefulness of the subsentences in a fuzzy match, and that a more comprehensive set of features, as we have explored in this paper, is essential for the discriminative learningbased method to work. FM Scores w. markup w/o markup [0,0.5] 37.75 62.24 (0.5,0.6] 40.64 59.36 (0.6,0.7] 40.94 59.06 (0.7,0.8] 46.67 53.33 (0.8,0.9] 54.28 45.72 (0.9,1.0] 44.14 55.86 Table 6: Percentage of training sentences with markup vs without markup grouped by fuzzy match (FM) score ranges To further validate our assumption, we analyse the training sentences by grouping them according to their fuzzy match score ranges. For each group of sentences, we calculate the percentage of sentences where markup (and respectively without markup) can produce better translations. The statistics are shown in Table 6. We can see that for sentences with fuzzy match scores lower than 0.8, more sentences can be better translated without markup. 
For sentences where fuzzy match scores are within the range (0.8, 0.9], more sentences can be better translated with markup. However, within the range (0.9, 1.0], surprisingly, actually more sentences receive better translation without markup. This indicates that fuzzy match score is not a good measure to predict whether fuzzy matches are beneficial when used to constrain the translation of an input sentence. 5.5 Contribution of Features We also investigated the contribution of our different feature sets. We are especially interested in the contribution of dependency features, as they re1245 Example 1 w/o markup after policy name , type the name of the policy ( it shows new host integrity policy by default ) . Translation 在“ 策略” 名称后面,键入策略的名称( 名称显示为“ 新主机完整性 策略默认)。 w. markup after policy name <tm translation=“,键入策略名称(默认显示“ 新 主机完整性策略” )。”>, type the name of the policy ( it shows new host integrity policy by default ) .< /tm> Translation 在“ 策略” 名称后面,键入策略名称(默认显示“ 新主机完整性策略” )。 Reference 在“ 策略名称” 后面,键入策略名称(默认显示“ 新主机完整性策略” )。 Example 2 w/o markup changes apply only to the specific scan that you select . Translation 更改仅适用于特定扫描的规则。 w. markup changes apply only to the specific scan that you select <tm translation=“。”>.< /tm> Translation 更改仅适用于您选择的特定扫描。 Reference 更改只应用于您选择的特定扫描。 flect whether translation consistency can be captured using syntactic knowledge. The classification and TER BLEU P R TM+TRANS 40.57 45.51 52.48 27.04 +DEP 39.61 46.46 60.00 32.14 Table 7: Contribution of Features (%) translation results using different features are reported in Table 7. We observe a significant improvement in both classification precision and recall by adding dependency (DEP) features on top of TM and translation features. As a result, the translation quality also significantly improves. This indicates that dependency features which can capture structural and semantic similarities are effective in gauging the usefulness of the phrase pairs derived from the fuzzy matches. Note also that without including the dependency features, our discriminative learning method cannot outperform the BASELINE (cf. Table 3) in terms of translation quality. 5.6 Improved Translations In order to pinpoint the sources of improvements by marking up the input sentence, we performed some manual analysis of the output. We observe that the improvements can broadly be attributed to two reasons: 1) the use of long phrase pairs which are missing in the phrase table, and 2) deterministically using highly reliable phrase pairs. Phrase-based SMT systems normally impose a limit on the length of phrase pairs for storage and speed considerations. Our method can overcome this limitation by retrieving and reusing long phrase pairs on the fly. A similar idea, albeit from a different perspective, was explored by (Lopez, 2008), where he proposed to construct a phrase table on the fly for each sentence to be translated. Differently from his approach, our method directly translates part of the input sentence using fuzzy matches retrieved on the fly, with the rest of the sentence translated by the pre-trained MT system. We offer some more insights into the advantages of our method by means of a few examples. Example 1 shows translation improvements by using long phrase pairs. Compared to the reference translation, we can see that for the underlined phrase, the translation without markup contains (i) word ordering errors and (ii) a missing right quotation mark. 
In Example 2, by specifying the translation of the final punctuation mark, the system correctly translates the relative clause ‘that you select’. The translation of this relative clause is missing when translating the input without markup. This improvement can be partly attributed to the reduction in search errors by specifying the highly reliable translations for phrases in an input sentence. 6 Conclusions and Future Work In this paper, we introduced a discriminative learning method to tightly integrate fuzzy matches retrieved using translation memory technologies with phrase-based SMT systems to improve translation consistency. We used an SVM classifier to predict whether phrase pairs derived from fuzzy matches could be used to constrain the translation of an in1246 put sentence. A number of feature functions including a series of novel dependency features were used to train the classifier. Experiments demonstrated that discriminative learning is effective in improving translation quality and is more informative than the fuzzy match score used in previous research. We report a statistically significant 0.9 absolute improvement in BLEU score using a procedure to promote translation consistency. As mentioned in Section 2, the potential improvement in sentence-level translation consistency using our method can be attributed to the fact that the translation of new input sentences is closely informed and guided (or constrained) by previously translated sentences using global features such as dependencies. However, it is worth noting that the level of gains in translation consistency is also dependent on the nature of the TM itself; a selfcontained coherent TM would facilitate consistent translations. In the future, we plan to investigate the impact of TM quality on translation consistency when using our approach. Furthermore, we will explore methods to promote translation consistency at document level. Moreover, we also plan to experiment with phrase-by-phrase classification instead of sentenceby-sentence classification presented in this paper, in order to obtain more stable classification results. We also plan to label the training examples using other sentence-level evaluation metrics such as Meteor (Banerjee and Lavie, 2005), and to incorporate features that can measure syntactic similarities in training the classifier, in the spirit of (Owczarzak et al., 2007). Currently, only a standard phrase-based SMT system is used, so we plan to test our method on a hierarchical system (Chiang, 2005) to facilitate direct comparison with (Koehn and Senellart, 2010). We will also carry out experiments on other data sets and for more language pairs. Acknowledgments This work is supported by Science Foundation Ireland (Grant No 07/CE/I1142) and part funded under FP7 of the EC within the EuroMatrix+ project (grant No 231720). The authors would like to thank the reviewers for their insightful comments and suggestions. References Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72, Ann Arbor, MI. Ergun Bic¸ici and Marc Dymetman. 2008. Dynamic translation memory: Using statistical machine translation to improve translation memory. In Proceedings of the 9th Internation Conference on Intelligent Text Processing and Computational Linguistics (CICLing), pages 454–465, Haifa, Israel. 
Chih-Chung Chang and Chih-Jen Lin, 2001. LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu. tw/˜cjlin/libsvm. David Chiang. 2005. A hierarchical Phrase-Based model for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05), pages 263–270, Ann Arbor, MI. Corinna Cortes and Vladimir Vapnik. 1995. Supportvector networks. Machine learning, 20(3):273–297. Yifan He, Yanjun Ma, Josef van Genabith, and Andy Way. 2010. Bridging SMT and TM with translation recommendation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 622–630, Uppsala, Sweden. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, volume 1, pages 181–184, Detroit, MI. Philipp Koehn and Jean Senellart. 2010. Convergence of translation memory and statistical machine translation. In Proceedings of AMTA Workshop on MT Research and the Translation Industry, pages 21–31, Denver, CO. Philipp Koehn, Franz Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the 2003 Human Language Technology Conference and the North American Chapter of the Association for Computational Linguistics, pages 48–54, Edmonton, AB, Canada. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Vol1247 ume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic. Vladimir Iosifovich Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady, 10(8):707–710. Hsuan-Tien Lin, Chih-Jen Lin, and Ruby C. Weng. 2007. A note on platt’s probabilistic outputs for support vector machines. Machine Learning, 68(3):267–276. Adam Lopez. 2008. Tera-scale translation models via pattern matching. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 505–512, Manchester, UK, August. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. WileyInterscience, New York, NY. Franz Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Franz Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan. Karolina Owczarzak, Josef van Genabith, and Andy Way. 2007. Labelled dependencies in machine translation evaluation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 104–111, Prague, Czech Republic. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of Machine Translation. In 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, PA. John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, pages 61–74. Richard Sikes. 2007. 
Fuzzy matching in theory and practice. Multilingual, 18(6):39–43. Michel Simard and Pierre Isabelle. 2009. Phrase-based machine translation in a computer-assisted translation environment. In Proceedings of the Twelfth Machine Translation Summit (MT Summit XII), pages 120 – 127, Ottawa, Ontario, Canada. James Smith and Stephen Clark. 2009. EBMT for SMT: A new EBMT-SMT hybrid. In Proceedings of the 3rd International Workshop on Example-Based Machine Translation, pages 3–10, Dublin, Ireland. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas (AMTA-2006), pages 223–231, Cambridge, MA, USA. Lucia Specia, Craig Saunders, Marco Turchi, Zhuoran Wang, and John Shawe-Taylor. 2009. Improving the confidence of machine translation quality estimates. In Proceedings of the Twelfth Machine Translation Summit (MT Summit XII), pages 136 – 143, Ottawa, Ontario, Canada. Andreas Stolcke. 2002. SRILM – An extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, pages 901–904, Denver, CO. Ventsislav Zhechev and Josef van Genabith. 2010. Seeding statistical machine translation with translation memory output through tree-based structural alignment. In Proceedings of the Fourth Workshop on Syntax and Structure in Statistical Translation, pages 43– 51, Beijing, China. 1248
2011
124
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1249–1257, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Machine Translation System Combination by Confusion Forest Taro Watanabe and Eiichiro Sumita National Institute of Information and Communications Technology 3-5 Hikaridai, Keihanna Science City, 619-0289 JAPAN {taro.watanabe,eiichiro.sumita}@nict.go.jp Abstract The state-of-the-art system combination method for machine translation (MT) is based on confusion networks constructed by aligning hypotheses with regard to word similarities. We introduce a novel system combination framework in which hypotheses are encoded as a confusion forest, a packed forest representing alternative trees. The forest is generated using syntactic consensus among parsed hypotheses: First, MT outputs are parsed. Second, a context free grammar is learned by extracting a set of rules that constitute the parse trees. Third, a packed forest is generated starting from the root symbol of the extracted grammar through non-terminal rewriting. The new hypothesis is produced by searching the best derivation in the forest. Experimental results on the WMT10 system combination shared task yield comparable performance to the conventional confusion network based method with smaller space. 1 Introduction System combination techniques take the advantages of consensus among multiple systems and have been widely used in fields, such as speech recognition (Fiscus, 1997; Mangu et al., 2000) or parsing (Henderson and Brill, 1999). One of the state-of-the-art system combination methods for MT is based on confusion networks, which are compact graph-based structures representing multiple hypotheses (Bangalore et al., 2001). Confusion networks are constructed based on string similarity information. First, one skeleton or backbone sentence is selected. Then, other hypotheses are aligned against the skeleton, forming a lattice with each arc representing alternative word candidates. The alignment method is either model-based (Matusov et al., 2006; He et al., 2008) in which a statistical word aligner is used to compute hypothesis alignment, or edit-based (Jayaraman and Lavie, 2005; Sim et al., 2007) in which alignment is measured by an evaluation metric, such as translation error rate (TER) (Snover et al., 2006). The new translation hypothesis is generated by selecting the best path through the network. We present a novel method for system combination which exploits the syntactic similarity of system outputs. Instead of constructing a string-based confusion network, we generate a packed forest (Billot and Lang, 1989; Mi et al., 2008) which encodes exponentially many parse trees in a polynomial space. The packed forest, or confusion forest, is constructed by merging the MT outputs with regard to their syntactic consensus. We employ a grammar-based method to generate the confusion forest: First, system outputs are parsed. Second, a set of rules are extracted from the parse trees. Third, a packed forest is generated using a variant of Earley’s algorithm (Earley, 1970) starting from the unique root symbol. New hypotheses are selected by searching the best derivation in the forest. The grammar, a set of rules, is limited to those found in the parse trees. 
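A minimal sketch may help to make the rule-extraction step concrete. The snippet below assumes bracketed parse strings for the system outputs and uses NLTK's Tree class for convenience; the two hypotheses are taken from the paper's running example (Figure 1(a), shown below), their bracketed parses follow the trees drawn in the paper's figures, and everything else (function name, counting scheme) is illustrative rather than a description of the actual implementation.

```python
from collections import Counter
from nltk import Tree

def extract_grammar(parsed_hypotheses):
    # Collect the CFG rules that constitute the parse trees of the MT outputs;
    # the extracted grammar is limited to exactly these rules.
    rules = Counter()
    for bracketing in parsed_hypotheses:
        tree = Tree.fromstring(bracketing)
        for prod in tree.productions():            # one rule per tree node
            rhs = tuple(str(sym) for sym in prod.rhs())
            rules[(str(prod.lhs()), rhs)] += 1
    return rules

hypotheses = [
    "(S (NP (PRP I)) (VP (VBD saw) (NP (DT the) (NN forest))))",
    "(S (NP (DT the) (NN forest)) (VP (VBD was) (VP (VBN found))))",
]
for (lhs, rhs), count in sorted(extract_grammar(hypotheses).items()):
    print("%s -> %s   (count %d)" % (lhs, " ".join(rhs), count))
```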
Spurious ambiguity during the generation step is further reduced by encoding the tree local contextual information in each non-terminal symbol, such as parent and sibling labels, using the state representation in Earley’s algorithm. 1249 Experiments were carried out for the system combination task of the fifth workshop on statistical machine translation (WMT10) in four directions, {Czech, French, German, Spanish}-toEnglish (Callison-Burch et al., 2010), and we found comparable performance to the conventional confusion network based system combination in two language pairs, and statistically significant improvements in the others. First, we will review the state-of-the-art method which is a system combination framework based on confusion networks (§2). Then, we will introduce a novel system combination method based on confusion forest (§3) and present related work in consensus translations (§4). Experiments are presented in Section 5 followed by discussion and our conclusion. 2 Combination by Confusion Network The system combination framework based on confusion network starts from computing pairwise alignment between hypotheses by taking one hypothesis as a reference. Matusov et al. (2006) employs a model based approach in which a statistical word aligner, such as GIZA++ (Och and Ney, 2003), is used to align the hypotheses. Sim et al. (2007) introduced TER (Snover et al., 2006) to measure the edit-based alignment. Then, one hypothesis is selected, for example by employing a minimum Bayes risk criterion (Sim et al., 2007), as a skeleton, or a backbone, which serves as a building block for aligning the rest of the hypotheses. Other hypotheses are aligned against the skeleton using the pairwise alignment. Figure 1(b) illustrates an example of a confusion network constructed from the four hypotheses in Figure 1(a), assuming the first hypothesis is selected as our skeleton. The network consists of several arcs, each of which represents an alternative word at that position, including the empty symbol, ϵ. This pairwise alignment strategy is prone to spurious insertions and repetitions due to alignment errors such as in Figure 1(a) in which “green” in the third hypothesis is aligned with “forest” in the skeleton. Rosti et al. (2008) introduces an incremental method so that hypotheses are aligned incrementally to the growing confusion network, not only the . . ..* ..I . .saw . .the . . .forest . . . ..I . .walked . .the . .blue . .forest . . . ..I . .saw . .the . . .green . .trees . . . . . .the . . .forest . .was . .found (a) Pairwise alignment using the first starred hypothesis as a skeleton. . . . . . . . . .I .ϵ .saw .ϵ .walked .the .blue .ϵ .forest .green .trees .ϵ .was .found .ϵ (b) Confusion network from (a) . . . . . . . . .I .ϵ .saw .ϵ .walked .the .blue .green .forest .trees .was .ϵ .found .ϵ (c) Incrementally constructed confusion network Figure 1: An example confusion network construction skeleton hypothesis. In our example, “green trees” is aligned with “blue forest” in Figure 1(c). The confusion network construction is largely influenced by the skeleton selection, which determines the global word reordering of a new hypothesis. For example, the last hypothesis in Figure 1(a) has a passive voice grammatical construction while the others are active voice. This large grammatical difference may produce a longer sentence with spuriously inserted words, as in “I saw the blue trees was found” in Figure 1(c). Rosti et al. 
(2007b) partially resolved the problem by constructing a large network in which each hypothesis was treated as a skeleton and the multiple networks were merged into a single network. 3 Combination by Confusion Forest The confusion network approach to system combination encodes multiple hypotheses into a compact lattice structure by using word-level consensus. Likewise, we propose to encode multiple hypotheses into a confusion forest, which is a packed forest which represents multiple parse trees in a polynomial space (Billot and Lang, 1989; Mi et al., 2008) Syntactic consensus is realized by sharing tree frag1250 . . . .PRP . .. ..I . . .NP@1 . . . . . . .DT . . . . .the . . .NN . . . . .forest . . .VBD@3 . . . . .was . . .VP@4 . . . . .VBN . . . . .found. . [email protected] . . . . . . .walked . . .saw . . [email protected] . . .DT . . . . .the . ..JJ . . . . . . .blue . . .green . . .NN . . . . . . .forest . . .trees . . [email protected] . . . . .the . . [email protected] . . . . .forest . . .VP@2 . . .S@ϵ Figure 2: An example packed forest representing hypotheses in Figure 1(a). ments among parse trees. The forest is represented as a hypergraph which is exploited in parsing (Klein and Manning, 2001; Huang and Chiang, 2005) and machine translation (Chiang, 2007; Huang and Chiang, 2007). More formally, a hypergraph is a pair ⟨V, E⟩ where V is the set of nodes and E is the set of hyperedges. Each node in V is represented as X@p where X ∈N is a non-terminal symbol and p is an address (Shieber et al., 1995) that encapsulates each node id relative to its parent. The root node is given the address ϵ and the address of the first child of node p is given p.1. Each hyperedge e ∈E is represented as a pair ⟨head(e), tails(e)⟩ where head(e) ∈V is a head node and tails(e) ∈ V ∗is a list of tail nodes, corresponding to the left-hand side and the right-hand side of an instance of a rule in a CFG, respectively. Figure 2 presents an example packed forest for the parsed hypotheses in Figure 1(a). For example, VP@2 has two hyperedges, ⟨VP@2, ( VBD@3, VP@4) ⟩and ⟨VP@2, ( [email protected], [email protected]) ⟩, leading to different derivations where the former takes the grammatical construction in passive voice while the latter in active voice. Given system outputs, we employ the following grammar based approach for constructing a confusion forest: First, MT outputs are parsed. Second, Initialization: [TOP →•S, 0] : ¯1 Scan: [X →α • xβ, h] : u [X →αx • β, h] : u Predict: [X →α • Yβ, h] [Y →•γ, h + 1] : u Y u→γ ∈G, h < H Complete: [X →α • Yβ, h] : u [Y →γ•, h + 1] : v [X →αY • β, h] : u ⊗v Goal: [TOP →S•, 0] Figure 3: The deductive system for Earley’s generation algorithm a grammar is learned by treating each hyperedge as an instance of a CFG rule. Third, a forest is generated from the unique root symbol of the extracted grammar through non-terminal rewriting. 3.1 Forest Generation Given the extracted grammar, we apply a variant of Earley’s algorithm (Earley, 1970) which can generate strings in a left-to-right manner from the unique root symbol, TOP. Figure 3 presents the deductive inference rules (Goodman, 1999) for our generation algorithm. We use capital letters X ∈N to denote non-terminals and x ∈T for terminals. Lowercase Greek letters α, β and γ are strings of terminals and non-terminals (T ∪N)∗. u and v are weights associated with each item. 
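Before turning to how this deductive system differs from Earley parsing, it may help to make the forest representation concrete. The sketch below mirrors the node and hyperedge definition given above, together with the per-hyperedge rule extraction used to build the grammar; the class and field names are our own choices, and only the two competing hyperedges of VP@2 from Figure 2 are shown.

```python
from collections import namedtuple

# A node is a non-terminal label plus an address: "" for the root (epsilon in
# the paper), and "p.1" for the first child of the node at address "p".
Node = namedtuple("Node", ["label", "address"])
# A hyperedge pairs a head node with its ordered tail nodes, i.e. the left-
# and right-hand side of one CFG rule instance.
Hyperedge = namedtuple("Hyperedge", ["head", "tails"])

class PackedForest:
    def __init__(self):
        self.nodes = set()
        self.edges = []

    def add_edge(self, head, tails):
        self.nodes.add(head)
        self.nodes.update(tails)
        self.edges.append(Hyperedge(head, tuple(tails)))

    def rules(self):
        # One CFG rule per hyperedge; this is the grammar handed to generation.
        return [(e.head.label, tuple(t.label for t in e.tails))
                for e in self.edges]

# The two competing hyperedges of VP@2 in Figure 2: the first derives the
# passive-voice construction ("was found"), the second the active-voice one.
forest = PackedForest()
forest.add_edge(Node("VP", "2"), [Node("VBD", "3"), Node("VP", "4")])
forest.add_edge(Node("VP", "2"), [Node("VBD", "2.1"), Node("NP", "2.2")])
print(forest.rules())   # [('VP', ('VBD', 'VP')), ('VP', ('VBD', 'NP'))]
```

Packing the passive-voice and active-voice derivations under the same head node is exactly what allows the generation step to recombine tree fragments across hypotheses.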
The major difference compared to Earley’s parsing algorithm is that we ignore the terminal span information each non-terminal covers and keep track of the height of derivations by h. The scanning step will always succeed by moving the dot to the right. Combined with the prediction and completion steps, our algorithm may potentially generate a spuriously deep forest. Thus, the height of the forest is constrained in the prediction step not to exceed H, which is empirically set to 1.5 times the maximum 1251 height of the parsed system outputs. 3.2 Tree Annotation The grammar compiled from the parsed trees is local in that it can represent a finite number of sentences translated from a specific input sentence. Although its coverage is limited, our generation algorithm may yield a spuriously large forest. As a way to reduce spurious ambiguities, we relabel the nonterminal symbols assigned to each parse tree before extracting rules. Here, we replace each non-terminal symbol by the state representation of Earley’s algorithm corresponding to the sequence of prediction steps starting from TOP. Figure 4(a) presents an example parse tree with each symbol replaced by the Earley’s state in Figure 4(b). For example, the label for VBD is replaced by •S + NP : •VP + •VBD : NP which corresponds to the prediction steps of TOP →•S, S →NP • VP and VP →•VBD NP. The context represented in the Earley’s state is further limited by the vertical and horizontal Markovization (Klein and Manning, 2003). We define the vertical order v in which the label is limited to memorize only v previous prediction steps. For instance, setting v = 1 yields NP : •VP + •VBD : NP in our example. Likewise, we introduce the horizontal order h which limits the number of sibling labels memorized on the left and the right of the dotted label. Limiting h = 1 implies that each deductive step is encoded with at most three symbols. No limits in the horizontal and vertical Markovization orders implies memorizing of all the deductions and yields a confusion forest representing the union of parse trees through the grammar collection and the generation processes. More relaxed horizontal orders allow more reordering of subtrees in a confusion forest by discarding the sibling context in each prediction step. Likewise, constraining the vertical order generates a deeper forest by ignoring the sequence of symbols leading to a particular node. 3.3 Forest Rescoring From the packed forest F, new k-best derivations are extracted from all possible derivations D by efficient forest-based algorithms for k-best parsing (Huang and Chiang, 2005). We use a linear combi. . ..S . . . . . . .NP . . . . .PRP . .. ..I . . .VP . . . . . . .VBD . . . . .saw . . .NP . . . . . . .DT . . . . .the . . .NN . . . . .forest (a) A parse tree for “I saw the forest” . . ..•S . . . . . . . •S + • NP : VP . . . . . •S + • NP : VP + • PRP . . . . .I . . . •S +NP : •VP . . . . . . . •S +NP : •VP + • VBD : NP . . . . .saw . . . •S +NP : •VP +VBD : •NP . . . . . . . •S +NP : •VP +VBD : •NP + • DT : NN . . . . .the . . . •S +NP : •VP +VBD : •NP +DT : •NN . . . . .forest (b) Earley’s state annotated tree for (a). The sub-labels in boldface indicate the original labels. Figure 4: Label annotation by Earley’s alsogirhtm state nation of features as our objective function to seek for the best derivation ˆd: ˆd = arg max d∈D w⊤· h(d, F) (1) where h(d, F) is a set of feature functions scaled by weight vector w. 
We use cube-pruning (Chiang, 2007; Huang and Chiang, 2007) to approximately intersect with non-local features, such as n-gram language models. Then, k-best derivations are extracted from the rescored forest using algorithm 3 of Huang and Chiang (2005). 4 Related Work Consensus translations have been extensively studied with many granularities. One of the simplest forms is a sentence-based combination in which hypotheses are simply reranked without merging (Nomoto, 2004). Frederking and Nirenburg (1994) 1252 proposed a phrasal combination by merging hypotheses in a chart structure, while others depended on confusion networks, or similar structures, as a building block for merging hypotheses at the word level (Bangalore et al., 2001; Matusov et al., 2006; He et al., 2008; Jayaraman and Lavie, 2005; Sim et al., 2007). Our work is the first to explicitly exploit syntactic similarity for system combination by merging hypotheses into a syntactic packed forest. The confusion forest approach may suffer from parsing errors such as the confusion network construction influenced by alignment errors. Even with parsing errors, we can still take a tree fragment-level consensus as long as a parser is consistent in that similar syntactic mistakes would be made for similar hypotheses. Rosti et al. (2007a) describe a re-generation approach to consensus translation in which a phrasal translation table is constructed from the MT outputs aligned with an input source sentence. New translations are generated by decoding the source sentence again using the newly extracted phrase table. Our grammar-based approach can be regarded as a regeneration approach in which an off-the-shelf monolingual parser, instead of a word aligner, is used to annotate syntactic information to each hypothesis, then, a new translation is generated from the merged forest, not from the input source sentence through decoding. In terms of generation, our approach is an instance of statistical generation (Langkilde and Knight, 1998; Langkilde, 2000). Instead of generating forests from semantic representations (Langkilde, 2000), we generate forests from a CFG encoding the consensus among parsed hypotheses. Liu et al. (2009) present joint decoding in which a translation forest is constructed from two distinct MT systems, tree-to-string and string-to-string, by merging forest outputs. Their merging method is either translation-level in which no new translation is generated, or derivation-level in that the rules sharing the same left-hand-side are used in both systems. While our work is similar in that a new forest is constructed by sharing rules among systems, although their work involves no consensus translation and requires structures internal to each system such as model combinations (DeNero et al., 2010). cz-en de-en es-en fr-en # of systems 6 16 8 14 avg. words tune 10.6K 10.9K 10.9K 11.0K test 50.5K 52.1K 52.1K 52.4K sentences tune 455 test 2,034 Table 1: WMT10 system combination tuning/testing data 5 Experiments 5.1 Setup We ran our experiments for the WMT10 system combination task usinge four language pairs, {Czech, French, German, Spanish}-to-English (Callison-Burch et al., 2010). The data is summarized in Table 1. The system outputs are retokenized to match the Penn-treebank standard, parsed by the Stanford Parser (Klein and Manning, 2003), and lower-cased. 
We implemented our confusion forest system combination using an in-house developed hypergraph-based toolkit cicada which is motivated by generic weighted logic programming (Lopez, 2009), originally developed for a synchronous-CFG based machine translation system (Chiang, 2007). Input to our system is a collection of hypergraphs, a set of parsed hypotheses, from which rules are extracted and a new forest is generated as described in Section 3. Our baseline, also implemented in cicada, is a confusion network-based system combination method (§2) which incrementally aligns hypotheses to the growing network using TER (Rosti et al., 2008) and merges multiple networks into a large single network. After performing epsilon removal, the network is transformed into a forest by parsing with monotone rules of S →X, S →S X and X →x. k-best translations are extracted from the forest using the forest-based algorithms in Section 3.3. 5.2 Features The feature weight vector w in Equation 1 is tuned by MERT over hypergraphs (Kumar et al., 2009). We use three lower-cased 5-gram language mod1253 els hi lm(d): English Gigaword Fourth edition1, the English side of French-English 109 corpus and the news commentary English data2. The count based features ht(d) and he(d) count the number of terminals and the number of hyperedges in d, respectively. We employ M confidence measures hm s (d) for M systems, which basically count the number of rules used in d originally extracted from mth system hypothesis (Rosti et al., 2007a). Following Macherey and Och (2007), BLEU (Papineni et al., 2002) correlations are also incorporated in our system combination. Given M system outputs e1...eM, M BLEU scores are computed for d using each of the system outputs em as a reference hm b (d) = BP(e, em) · exp ( 1 4 4 ∑ n=1 log ρn(e, em) ) where e = yield(d) is a terminal yield of d, BP(·) and ρn(·) respectively denote brevity penalty and n-gram precision. Here, we use approximated unclipped n-gram counts (Dreyer et al., 2007) for computing ρn(·) with a compact state representation (Li and Khudanpur, 2009). Our baseline confusion network system has an additional penalty feature, hp(m), which is the total edits required to construct a confusion network using the mth system hypothesis as a skeleton, normalized by the number of nodes in the network (Rosti et al., 2007b). 5.3 Results Table 2 compares our confusion forest approach (CF) with different orders, a confusion network (CN) and max/min systems measured by BLEU (Papineni et al., 2002). We vary the horizontal orders, h = 1, 2, ∞with vertical orders of v = 3, 4, ∞. Systems without statistically significant differences from the best result (p < 0.05) are indicated by bold face. Setting v = ∞and h = ∞achieves comparable performance to CN. Our best results in three languages come from setting v = ∞and h = 2, which favors little reordering of phrasal structures. In general, lower horizontal and vertical order leads to lower BLEU. 1LDC catalog No. LDC2009T13 2Those data are available from http://www.statmt. org/wmt10/. language cz-en de-en es-en fr-en system min 14.09 15.62 21.79 16.79 max 23.44 24.10 29.97 29.17 CN 23.70 24.09 30.45 29.15 CFv=∞,h=∞ 24.13 24.18 30.41 29.57 CFv=∞,h=2 24.14 24.58 30.52 28.84 CFv=∞,h=1 24.01 23.91 30.46 29.32 CFv=4,h=∞ 23.93 23.57 29.88 28.71 CFv=4,h=2 23.82 22.68 29.92 28.83 CFv=4,h=1 23.77 21.42 30.10 28.32 CFv=3,h=∞ 23.38 23.34 29.81 27.34 CFv=3,h=2 23.30 23.95 30.02 28.19 CFv=3,h=1 23.23 21.43 29.27 26.53 Table 2: Translation results in lower-case BLEU. 
CN for confusion network and CF for confusion forest with different vertical (v) and horizontal (h) Markovization order. language cz-en de-en es-en fr-en rerank 29.40 32.32 36.83 36.59 CN 38.52 34.97 47.65 46.37 CFv=∞,h=∞ 30.51 34.07 38.69 38.94 CFv=∞,h=2 30.61 34.25 38.87 39.10 CFv=∞,h=1 31.09 34.65 39.27 39.51 CFv=4,h=∞ 30.86 34.19 39.17 39.39 CFv=4,h=2 30.96 34.32 39.35 39.57 CFv=4,h=1 31.44 34.62 39.69 39.90 CFv=3,h=∞ 31.03 34.30 39.29 39.57 CFv=3,h=2 31.25 34.97 39.61 40.00 CFv=3,h=1 31.55 34.60 39.72 39.97 Table 3: Oracle lower-case BLEU Table 3 presents oracle BLEU achievable by each combination method. The gains achievable by the CF over simple reranking are small, at most 2-3 points, indicating that small variations are encoded in confusion forests. We also observed that a lower horizontal and vertical order leads to better BLEU potentials. As briefly pointed out in Section 3.2, the higher horizontal and vertical order implies more faithfulness to the original parse trees. Introducing new tree fragments to confusion forests leads to new phrasal translations with enlarged forests, as presented in Table 4, measured by the average number 1254 lang cz-en de-en es-en fr-en CN 2,222.68 47,231.20 2,932.24 11,969.40 lattice 1,723.91 41,403.90 2,330.04 10,119.10 CFv=∞ 230.08 540.03 262.30 386.79 CFv=4 254.45 651.10 302.01 477.51 CFv=3 286.01 802.79 349.21 575.17 Table 4: Hypegraph size measured by the average number of hyperedges (h = 1 for CF). “lattice” is the average number of edges in the original CN. of hyperedges3. The larger potentials do not imply better translations, probably due to the larger search space with increased search errors. We also conjecture that syntactic variations were not captured by the n-gram like string-based features in Section 5.2, therefore resulting in BLEU loss, which will be investigated in future work. In contrast, CN has more potential for generating better translations, with the exception of the German-to-English direction, with scores that are usually 10 points better than simple sentence-wise reranking. The low potential in German should be interpreted in the light of the extremely large confusion network in Table 4. We postulate that the divergence in German hypotheses yields wrong alignments, and therefore amounts to larger networks with incorrect hypotheses. Table 4 also shows that CN produces a forest that is an order of magnitude larger than those created by CFs. Although we cannot directly relate the runtime and the number of hyperedges in CN and CFs, since the shape of the forests are different, CN requires more space to encode the hypotheses than those by CFs. Table 5 compares the average length of the minimum/maximum hypothesis that each method can produce. CN may generate shorter hypotheses, whereby CF prefers longer hypotheses as we decrease the vertical order. Large divergence is also observed for German, such as for hypergraph size. 6 Conclusion We presented a confusion forest based method for system combination in which system outputs are merged into a packed forest using their syntactic 3We measure the hypergraph size before intersecting with non-local features, like n-gram language models. language cz-en de-en es-en fr-en system avg. 
24.84 25.62 25.63 25.75 CN min 11.09 3.39 12.27 7.94 max 33.69 40.65 33.22 36.27 CFv=∞min 15.97 10.88 17.67 16.62 max 35.20 47.20 35.28 37.94 CFv=4 min 15.52 10.58 17.02 15.85 max 37.11 53.67 38.56 42.64 CFv=3 min 15.15 10.34 16.54 15.30 max 39.88 68.45 42.85 49.55 Table 5: Average min/max hypothesis length producible by each method (h = 1 for CF). similarity. The forest construction is treated as a generation from a CFG compiled from the parsed outputs. Our experiments indicate comparable performance to a strong confusion network baseline with smaller space, and statistically significant gains in some language pairs. To our knowledge, this is the first work to directly introduce syntactic consensus to system combination by encoding multiple system outputs into a single forest structure. We believe that the confusion forest based approach to system combination has future exploration potential. For instance, we did not employ syntactic features in Section 5.2 which would be helpful in discriminating hypotheses in larger forests. We would also like to analyze the trade-offs, if any, between parsing errors and confusion forest constructions by controlling the parsing qualities. As an alternative to the grammar-based forest generation, we are investigating an edit distance measure for tree alignment, such as tree edit distance (Bille, 2005) which basically computes insertion/deletion/replacement of nodes in trees. Acknowledgments We would like to thank anonymous reviewers and our colleagues for helpful comments and discussion. References Srinivas Bangalore, German Bordel, and Giuseppe Riccardi. 2001. Computing consensus translation from multiple machine translation systems. In Proceedings of Automatic Speech Recognition and Understanding (ASRU), 2001, pages 351 – 354. 1255 Philip Bille. 2005. A survey on tree edit distance and related problems. Theor. Comput. Sci., 337:217–239, June. Sylvie Billot and Bernard Lang. 1989. The structure of shared forests in ambiguous parsing. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, pages 143–151, Vancouver, British Columbia, Canada, June. Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 17–53, Uppsala, Sweden, July. Revised August 2010. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. John DeNero, Shankar Kumar, Ciprian Chelba, and Franz Och. 2010. Model combination for machine translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 975–983, Los Angeles, California, June. Markus Dreyer, Keith Hall, and Sanjeev Khudanpur. 2007. Comparing reordering constraints for smt using efficient bleu oracle computation. In Proceedings of SSST, NAACL-HLT 2007 / AMTA Workshop on Syntax and Structure in Statistical Translation, pages 103– 110, Rochester, New York, April. Jay Earley. 1970. An efficient context-free parsing algorithm. Communications of the Association for Computing Machinery, 13:94–102, February. J.G. Fiscus. 1997. A post-processing system to yield reduced word error rates: Recognizer output voting error reduction (rover). 
In Proceedings of Automatic Speech Recognition and Understanding (ASRU), 1997, pages 347 –354, December. Robert Frederking and Sergei Nirenburg. 1994. Three heads are better than one. In Proceedings of the fourth conference on Applied natural language processing, pages 95–100, Morristown, NJ, USA. Joshua Goodman. 1999. Semiring parsing. Computational Linguistics, 25:573–605, December. Xiaodong He, Mei Yang, Jianfeng Gao, Patrick Nguyen, and Robert Moore. 2008. Indirect-HMM-based hypothesis alignment for combining outputs from machine translation systems. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 98–107, Honolulu, Hawaii, October. John C. Henderson and Eric Brill. 1999. Exploiting diversity in natural language processing: Combining parsers. In Proceedings of the Fourth Conference on Empirical Methods in Natural Language Processing, pages 187–194. Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 53–64, Vancouver, British Columbia, October. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 144–151, Prague, Czech Republic, June. Shyamsundar Jayaraman and Alon Lavie. 2005. Multiengine machine translation guided by explicit word matching. In Proceedings of the ACL 2005 on Interactive poster and demonstration sessions, ACL ’05, pages 101–104, Morristown, NJ, USA. Dan Klein and Christopher D. Manning. 2001. Parsing and hypergraphs. In Proceedings of the Seventh International Workshop on Parsing Technologies (IWPT2001), pages 123–134. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 423–430, Sapporo, Japan, July. Shankar Kumar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient minimum error rate training and minimum bayes-risk decoding for translation hypergraphs and lattices. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 163–171, Suntec, Singapore, August. Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 1, ACL-36, pages 704–710, Morristown, NJ, USA. Irene Langkilde. 2000. Forest-based statistical sentence generation. In Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference, pages 170–177, San Francisco, CA, USA. Zhifei Li and Sanjeev Khudanpur. 2009. Efficient extraction of oracle-best translations from hypergraphs. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 9–12, Boulder, Colorado, June. Yang Liu, Haitao Mi, Yang Feng, and Qun Liu. 2009. Joint decoding with multiple translation models. In 1256 Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 576–584, Suntec, Singapore, August. Adam Lopez. 2009. 
Translation as weighted deduction. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 532– 540, Athens, Greece, March. Wolfgang Macherey and Franz J. Och. 2007. An empirical study on computing consensus translations from multiple machine translation systems. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 986–995, Prague, Czech Republic, June. Lidia Mangu, Eric Brill, and Andreas Stolcke. 2000. Finding consensus in speech recognition: word error minimization and other applications of confusion networks. Computer Speech & Language, 14(4):373 – 400. Evgeny Matusov, Nicola Ueffing, and Hermann Ney. 2006. Computing consensus translation from multiple machine translation systems using enhanced hypotheses alignment. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, pages 33–40. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proceedings of ACL-08: HLT, pages 192–199, Columbus, Ohio, June. Tadashi Nomoto. 2004. Multi-engine machine translation with voted language model. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 494–501, Barcelona, Spain, July. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Antti-Veikko Rosti, Necip Fazil Ayan, Bing Xiang, Spyros Matsoukas, Richard Schwartz, and Bonnie Dorr. 2007a. Combining outputs from multiple machine translation systems. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 228– 235, Rochester, New York, April. Antti-Veikko Rosti, Spyros Matsoukas, and Richard Schwartz. 2007b. Improved word-level system combination for machine translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 312–319, Prague, Czech Republic, June. Antti-Veikko Rosti, Bing Zhang, Spyros Matsoukas, and Richard Schwartz. 2008. Incremental hypothesis alignment for building confusion networks with application to machine translation system combination. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 183–186, Columbus, Ohio, June. Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24(1–2):3–36, July–August. K.C. Sim, W.J. Byrne, M.J.F. Gales, H. Sahbi, and P.C. Woodland. 2007. Consensus network decoding for statistical machine translation system combination. In Proceedings of Acoustics, Speech and Signal Processing (ICASSP), 2007, volume 4, pages IV–105 –IV– 108, April. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In In Proceedings of Association for Machine Translation in the Americas, pages 223–231. 1257
2011
125
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1258–1267, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Hypothesis Mixture Decoding for Statistical Machine Translation Nan Duan, Mu Li, and Ming Zhou School of Computer Science and Technology Natural Language Computing Group Tianjin University Microsoft Research Asia Tianjin, China Beijing, China [email protected] {muli,mingzhou}@microsoft.com Abstract This paper presents hypothesis mixture decoding (HM decoding), a new decoding scheme that performs translation reconstruction using hypotheses generated by multiple translation systems. HM decoding involves two decoding stages: first, each component system decodes independently, with the explored search space kept for use in the next step; second, a new search space is constructed by composing existing hypotheses produced by all component systems using a set of rules provided by the HM decoder itself, and a new set of model independent features are used to seek the final best translation from this new search space. Few assumptions are made by our approach about the underlying component systems, enabling us to leverage SMT models based on arbitrary paradigms. We compare our approach with several related techniques, and demonstrate significant BLEU improvements in large-scale Chinese-to-English translation tasks. 1 Introduction Besides tremendous efforts on constructing more complicated and accurate models for statistical machine translation (SMT) (Och and Ney, 2004; Chiang, 2005; Galley et al., 2006; Shen et al., 2008; Chiang 2010), many researchers have concentrated on the approaches that improve translation quality using information between hypotheses from one or more SMT systems as well. System combination is built on top of the N-best outputs generated by multiple component systems (Rosti et al., 2007; He et al., 2008; Li et al., 2009b) which aligns multiple hypotheses to build confusion networks as new search spaces, and outputs the highest scoring paths as the final translations. Consensus decoding, on the other hand, can be based on either single or multiple systems: single system based methods (Kumar and Byrne, 2004; Tromble et al., 2008; DeNero et al., 2009; Kumar et al., 2009) re-rank translations produced by a single SMT model using either n-gram posteriors or expected n-gram counts. Because hypotheses generated by a single model are highly correlated, improvements obtained are usually small; recently, dedicated efforts have been made to extend it from single system to multiple systems (Li et al., 2009a; DeNero et al., 2010; Duan et al., 2010). Such methods select translations by optimizing consensus models over the combined hypotheses using all component systems’ posterior distributions. Although these two types of approaches have shown consistent improvements over the standard Maximum a Posteriori (MAP) decoding scheme, most of them are implemented as post-processing procedures over translations generated by MAP decoders. In this sense, the work of Li et al. (2009a) is different in that both partial and full hypotheses are re-ranked during the decoding phase directly using consensus between translations from different SMT systems. However, their method does not change component systems’ search spaces. This paper presents hypothesis mixture decoding (HM decoding), a new decoding scheme that performs translation reconstruction using hypotheses generated by multiple component systems. 
HM decoding involves two decoding stages: first, each component system decodes the source sentence independently, with the explored search space kept for use in the next step; second, a new search space is constructed by composing existing hypo1258 theses produced by all component systems using a set of rules provided by the HM decoder itself, and a new set of component model independent features are used to seek the final best translation from this new constructed search space. We evaluate by combining two SMT models with state-of-the-art performances on the NIST Chinese-to-English translation tasks. Experimental results show that our approach outperforms the best component SMT system by up to 2.11 BLEU points. Consistent improvements can be observed over several related decoding techniques as well, including word-level system combination, collaborative decoding and model combination. 2 Hypothesis Mixture Decoding 2.1 Motivation and Overview SMT models based on different paradigms have emerged in the last decade using fairly different levels of linguistic knowledge. Motivated by the success of system combination research, the key contribution of this work is to make more effective use of the extended search spaces from different SMT models in decoding phase directly, rather than just post-processing their final outputs. We first begin with a brief review of single system based SMT decoding, and then illustrate major challenges to this end. Given a source sentence , an SMT decoder seeks for a target translation that best matches as its translation by maximizing the following conditional probability: where is the feature vector that includes a set of system specific features, is the weight vector, is a derivation that can yield and is defined as a sequence of translation rule applications . Figure 1 illustrates a decoding example, in which the final translation is generated by recursively composing partial hypotheses that cover different ranges of the source sentence until the whole input sentence is fully covered, and the feature vector of the final translation is the aggregation of feature vectors of all partial hypotheses used.1 However, hypotheses generated by different SMT systems cannot be combined directly to form new translations because of two major issues: The first one is the heterogeneous structures of different SMT models. For example, a string-totree system cannot use hypotheses generated by a phrase-based system in decoding procedure, as such hypotheses are based on flat structures, which cannot provide any additional information needed in the syntactic model. The second one is the incompatible feature spaces of different SMT models. For example, even if a phrase-based system can use the lexical forms of hypotheses generated by a syntax-based system without considering syntactic structures, the feature vectors of these hypotheses still cannot be aggregated together in any trivial way, because the feature sets of SMT models based on different paradigms are usually inconsistent. To address these two issues discussed above, we propose HM decoding that performs translation reconstruction using hypotheses generated by multiple component systems. 2 Our method involves two decoding stages depicted as follows: 1. Independent decoding stage, in which each component system decodes input sentences independently based on its own model and search algorithm, and the explored search spaces (translation forests) are kept for use in the next stage. 
1 There are also features independent of translation derivations, such as the language model feature. 2 In this paper, we will constrain our discussions within CKYstyle decoders, in which we find translations for all spans of the source sentence. Although standard implementations of phrase-based decoders fall out of this scope, they can be still re-written to work in the CKY-style bottom-up manner at the cost of 1) only BTG-style reordering allowed, and 2) higher time complexity. As a result, any phrase-based SMT system can be used as a component in our HM decoding method. China ’s economic growth [-2.48, 4] China [-0.36, 1] 的 中国 经济 发展 ’s [-0.69, 1] economic [-0.51, 1] growth [-0.92, 1] China ‘s [-1.05, 2] economic growth [-1.43, 2] Figure 1: A decoding example of a phrase-based SMT system. Each hypothesis is annotated with a feature vector, which includes a logarithmic probability feature and a word count feature. 1259 2. HM decoding stage, where a mixture search space is constructed for translation derivations by composing partial hypotheses generated by all component systems, and a new decoding model with a set of enriched feature functions are used to seek final translations from this newly generated search space. HM decoding can use lexicalized hypotheses of arbitrary SMT models to derive translation, and a set of component model independent features are used to compute translation confidence. We discuss mixture search space construction, details of model and feature designs as well as HM decoding algorithms in Section 2.2, 2.3 and 2.4 respectively. 2.2 Mixture Search Space Construction Let denote component MT systems, denote the span of a source sentence starting at position and ending at position . We use denoting the search space of predicted by , and denoting the mixture search space of constructed by the HM decoder, which is defined recursively as follows:  . This rule adds all component systems’ search spaces into the mixture search space for use in HM decoding. Thus hypotheses produced by all component systems are still available to the HM decoder.  , in which and . is a translation rule provided by HM decoder that composes a new hypothesis using smaller hypotheses in the search spaces . These rules further extend with hypotheses generated by the HM decoder itself. Figure 2 shows an example of HM decoding, in which hypotheses generated by two SMT systems are used together to compose new translations. Since search space pruning is the indispensable procedure for all SMT systems, we will omit its explicit expression in the following descriptions and algorithms for convenience. 2.3 Models and Features Following the common practice in SMT research, we use a linear model to formulate the preference of translation hypotheses in the mixture search space . Formally, we are to find a translation that maximizes the weighted linear combination of a set of real-valued features as follows: where is an HM decoding feature with its corresponding feature weight . In this paper, the HM decoder does not assume the availability of any internal knowledge of the underlying component systems. The HM decoding features are independent of component models as well, which fall into two categories: The first category contains a set of consensusbased features, which are inspired by the success of consensus decoding approaches. 
These features are described in details as follows: 1) : the n-gram posterior feature of computed based on the component search space generated by : is the posterior probability of an n-gram in , is the number of times that occurs in , equals to 1 when occurs in , and 0 otherwise. Figure 2: An example of HM decoding, in which the translations surrounded by the dotted lines are newly generated hypotheses. Hypotheses light-shaded come from a phrase-based system, and hypotheses darkshaded come from a syntax-based system. economic growth of China economic growth China ’s 的 中国 经济 发展 development of economy China ’s development of economy China ‘s economic growth of China development of economy of China … Rules provided by the HM decoder 1260 2) : the stemmed n-gram posterior feature of computed based on the stemmed component search space . A word stem dictionary that includes 22,660 entries is used to convert and into their stem forms and by replacing each word into its stem form. This feature is computed similarly to that of . 3) : the n-gram posterior feature of computed based on the mixture search space generated by the HM decoder: is the posterior probability of an n-gram in , is the posterior probability of one translation given based on . 4) : the length posterior feature of the specific target hypothesis with length based on the mixture search space generated by the HM decoder: Note here that features in and will be computed when the computations of all the remainder features in two categories have already finished for each in , and they will be used to update current HM decoding model scores. Consensus features based on component search spaces have already shown effectiveness (Kumar et al., 2009; DeNero et al., 2010; Duan et al., 2010). We leverage consensus features based on the mixture search space newly generated in HM decoding as well. The length posterior feature (Zen and Ney, 2006) is used to adjust the preference of HM decoder for longer or shorter translations, and the stemmed n-gram posterior features are used to provide more discriminative power for HM decoding and to decrease the effects of morphological changes in words for more accurate computation of consensus statistics. The second feature category contains a set of general features. Although there are more features that can be incorporated into HM decoding besides the ones we list below, we only utilize the most representative ones for convenience: 1) : the word count feature. 2) : the language model feature. 3) : the dictionary-based feature that counts how many lexicon pairs can be found in a given translation pair . 4) and : reordering features that penalize the uses of straight and inverted BTG rules during the derivation of in HM decoding. These two features are specific to BTG-based HM decoding (Section 2.4.1): 5) and : reordering features that penalize the uses of hierarchical and glue rules during the derivation of in HM decoding. These two features are specific to SCFG-based HM decoding (Section 2.4.2): is the hierarchical rule set provided by the HM decoder itself, equals to 1 when is provided by , and 0 otherwise. 6) : the feature that counts how many n-grams in are newly generated by the HM decoder, which cannot be found in all existing component search spaces: equals to 1 when does not exist in , and 0 otherwise. The MERT algorithm (Och, 2003) is used to tune weights of HM decoding features. 2.4 Decoding Algorithms Two CKY-style algorithms for HM decoding are presented in this subsection. 
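Before turning to the two algorithms, the consensus features above can be made concrete with a short sketch. Two simplifications should be noted: the paper computes n-gram posteriors over full component and mixture forests using the algorithm of Kumar et al. (2009), whereas the sketch below approximates a search space by an n-best list with log-linear model scores, and the example hypotheses and scores are invented purely for illustration.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_posteriors(nbest, n):
    # p(g | space): total posterior mass of the hypotheses containing n-gram g.
    # `nbest` approximates a search space as (tokens, log_score) pairs; the
    # paper instead computes these posteriors over full translation forests.
    total = sum(math.exp(score) for _, score in nbest)
    posteriors = Counter()
    for tokens, score in nbest:
        p = math.exp(score) / total
        for g in set(ngrams(tokens, n)):   # the 0/1 indicator in the feature
            posteriors[g] += p
    return posteriors

def ngram_posterior_feature(hyp_tokens, posteriors, n):
    # Sum over n-grams g of (# times g occurs in the hypothesis) * p(g | space).
    counts = Counter(ngrams(hyp_tokens, n))
    return sum(c * posteriors[g] for g, c in counts.items())

# Invented 2-best list standing in for one component search space.
nbest = [("china 's economic growth".split(), -2.48),
         ("economic growth of china".split(), -2.60)]
post = ngram_posteriors(nbest, 2)
print(ngram_posterior_feature("china 's economic growth".split(), post, 2))
```

The same two functions, applied to stemmed hypotheses or to the mixture search space, would give the stemmed and mixture-space variants of the feature.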
The first one is based on BTG (Wu, 1997), and the second one is based on SCFG, similar to Chiang (2005). 1261 2.4.1 BTG-based HM Decoding The first algorithm, BTG-HMD, is presented in Algorithm 1, where hypotheses of two consecutive source spans are composed using two BTG rules:  Straight rule . It combines translations of two consecutive blocks into a single larger block in a straight order.  Inverted rule . It combines translations of two consecutive blocks into a single larger block in an inverted order. These two rules are used bottom-up until the whole source sentence is fully covered. We use two reordering rule penalty features, and , to penalize the uses of these two rules. Algorithm 1: BTG-based HM Decoding 1: for each component model do 2: output the search space for the input 3: end for 4: for to do 5: for all s.t. do 6: 7: for all s.t. do 8: for and do 9: add to 10: add to 11: end for 12: end for 13: for each hypothesis do 14: compute HM decoding features for 15: add to 16: end for 17: for each hypothesis do 18: compute the n-gram and length posterior features for based on 19: update current HM decoding score of 20: end for 21: end for 22: end for 23: return with the maximum model score In BTG-HMD, in order to derive translations for a source span , we compose hypotheses of any two smaller spans and using two BTG rules in line 9 and 10, denotes the operations that firstly combine and using one BTG rule and secondly compute HM decoding features for the newly generated hypothesis . We compute HM decoding features for hypotheses contained in all existing component search spaces as well, and add them to . From line 17 to 20, we update current HM decoding scores for all hypotheses in using the n-gram and length posterior features computed based on . When the whole source sentence is fully covered, we return the hypothesis with the maximum model score as the final best translation. 2.4.2 SCFG-based HM Decoding The second algorithm, SCFG-HMD, is presented in Algorithm 2. An additional rule set , which is provided by the HM decoder, is used to compose hypotheses. It includes hierarchical rules extracted using Chiang (2005)’s method and glue rules. Two reordering rule penalty features, and , are used to adjust the preferences of using hierarchical rules and glue rules. Algorithm 2: SCFG-based HM Decoding 1: for each component model do 2: output the search space for the input 3: end for 4: for to do 5: for all s.t. do 6: 7: for each rule that matches do 8: for and do 9: add to 10: end for 11: end for 12: for each hypothesis do 13: compute HM decoding features for 14: add to 15: end for 16: for each hypothesis do 17: compute the n-gram and length posterior features for based on 18: update current HM decoding score of 19: end for 20: end for 21: end for 22: return with the maximum model score Compared to BTG-HMD, the key differences in SCFG-HMD are located from line 7 to 11, where the translation for a given span is generated by replacing the non-terminals in a hierarchical rule with their corresponding target translations, is the source span that is covered by the th nonterminal of , is the search space for predicted by the HM decoder. 1262 3 Comparisons to Related Techniques 3.1 Model Combination and Mixture Model based MBR Decoding Model combination (DeNero et al., 2010) is an approach that selects translations from a conjoint search space using information from multiple SMT component models; Duan et al. 
(2010) presents a similar method, which utilizes a mixture model to combine distributions of hypotheses from different systems for Bayes-risk computation, and selects final translations from the combined search spaces using MBR decoding. Both of these two methods share a common limitation: they only re-rank the combined search space, without the capability to generate new translations. In contrast, by reusing hypotheses generated by all component systems in HM decoding, translations beyond any existing search space can be generated. 3.2 Co-Decoding and Joint Decoding Li et al. (2009a) proposes collaborative decoding, an approach that combines translation systems by re-ranking partial and full translations iteratively using n-gram features from the predictions of other member systems. However, in co-decoding, all member systems must work in a synchronous way, and hypotheses between different systems cannot be shared during decoding procedure; Liu et al. (2009) proposes joint-decoding, in which multiple SMT models are combined in either translation or derivation levels. However, their method relies on the correspondence between nodes in hypergraph outputs of different models. HM decoding, on the other hand, can use hypotheses from component search spaces directly without any restriction. 3.3 Hybrid Decoding Hybrid decoding (Cui et al., 2010) resembles our approach in the motivation. This method uses the system combination technique in decoding directly to combine partial hypotheses from different SMT models. However, confusion network construction brings high computational complexity. What’s more, partial hypotheses generated by confusion network decoding cannot be assigned exact feature values for future use in higher level decoding, and they only use feature values of 1-best hypothesis as an approximation. HM decoding, on the other hand, leverages a set of enriched features, which are computable for all the hypotheses generated by either component systems or the HM decoder. 4 Experiments 4.1 Data and Metric Experiments are conducted on the NIST Chineseto-English MT tasks. The NIST 2004 (MT04) data set is used as the development set, and evaluation results are reported on the NIST 2005 (MT05), the newswire portions of the NIST 2006 (MT06) and 2008 (MT08) data sets. All bilingual corpora available for the NIST 2008 constrained data track of Chinese-to-English MT task are used as training data, which contain 5.1M sentence pairs, 128M Chinese words and 147M English words after preprocessing. Word alignments are performed using GIZA++ with the intersect-diag-grow refinement. The English side of bilingual corpus plus Xinhua portion of the LDC English Gigaword Version 3.0 are used to train a 5-gram language model. Translation performance is measured in terms of case-insensitive BLEU scores (Papineni et al., 2002), which compute the brevity penalty using the shortest reference translation for each segment. Statistical significance is computed using the bootstrap re-sampling approach proposed by Koehn (2004). Table 1 gives some data statistics. Data Set #Sentence #Word MT04(dev) 1,788 48,215 MT05 1,082 29,263 MT06 616 17,316 MT08 691 17,424 Table 1: Statistics on dev and test data sets 4.2 Component Systems For convenience of comparing HM decoding with several related decoding techniques, we include two state-of-the-art SMT systems as component systems only:  PB. A phrase-based system (Xiong et al., 2006) with one lexicalized reordering model based on the maximum entropy principle.  DHPB. 
A string-to-dependency tree-based system (Shen et al., 2008), which translates source strings to target dependency trees. A target dependency language model is used as an additional feature. 1263 Phrasal rules are extracted on all bilingual data, hierarchical rules used in DHPB and reordering rules used in SCFG-HMD are extracted from a selected data set3. Reordering model used in PB is trained on the same selected data set as well. A trigram dependency language model used in DHPB is trained with the outputs from Berkeley parser on all language model training data. 4.3 Contrastive Techniques We compare HM decoding with three multiplesystem based decoding techniques:  Word-Level System Combination (SC). We re-implement an IHMM alignment based system combination method proposed by Li et al. (2009b). The setting of the N-best candidates used is the same as the original paper.  Co-decoding (CD). We re-implement it based on Li et al. (2009a), with the only difference that only two models are included in our reimplementation, instead of three in theirs. For each test set, co-decoding outputs three results, two for two member systems, and one for the further system combination.  Model Combination (MC). Different from codecoding, MC produces single one output for each input sentence. We re-implement this method based on DeNero et al. (2010) with two component models included. 4.4 Comparison to Component Systems We compared HM decoding with two component SMT systems first (in Table 2). 30 features are used to annotate each hypothesis in HM decoding, including: 8 n-gram posterior features computed from PB/DHPB forests for ; 8 stemmed n-gram posterior features computed from stemmed PB/DHPB forests for ; 4 n-gram posterior features and 1 length posterior feature computed from the mixture search space of HM decoder for ; 1 LM feature; 1 word count feature; 1 dictionary-based feature; 2 grammarspecified rule penalty features for either BTGHMD or SCFG-HMD; 4 count features for newly generated n-grams in HM decoding for . All n-gram posteriors are computed using the efficient algorithm proposed by Kumar et al. (2009). 3 LDC2003E07, LDC2003E14, LDC2005T06, LDC2005T10, LDC2005E83, LDC2006E26, LDC2006E34, LDC2006E85 and LDC2006E92 Model BLEU% MT04 MT05 MT06 MT08 PB 38.93 38.21 33.59 29.62 DHPB 39.90 39.76 35.00 30.43 BTG-HMD 41.24* 41.26* 36.76* 31.69* SCFG-HMD 41.31* 41.19* 36.63* 31.52* Table 2: HM decoding vs. single component system decoding (*: significantly better than each component system with < 0.01) From table 2 we can see, both BTG-HMD and SCFG-HMD outperform decoding results of the best component system (DHPB) with significant improvements: +1.50, +1.76, and +1.26 BLEU points on MT05, MT06, and MT08 for BTG-HMD; +1.43, +1.63 and +1.09 BLEU points on MT05, MT06, and MT08 for SCFG-HMD. We also notice that BTG-HMD performs slight better than SCFGHMD on test sets. We think the potential reason is that more reordering rules are used in SCFG-HMD to handle phrase movements than BTG-HMD do; however, current HM decoding model lacks the ability to distinguish the qualities of different rules. We also investigate on the effects of different HM-decoding features. For the convenience of comparison, we divide them into five categories:  Set-1. 8 n-gram posterior features based on 2 component search spaces plus 3 commonly used features (1 LM feature, 1 word count feature and 1 dictionary-based feature).  Set-2. 8 stemmed n-gram posterior features based on 2 stemmed component search spaces.  Set-3. 
4 n-gram posterior features and 1 length posterior feature based on the mixture search space of the HM decoder.  Set-4. 2 grammar-specified reordering rule penalty features.  Set-5. 4 count features for unseen n-grams generated by HM decoder itself. Except for the dictionary-based feature, all the features contained in Set-1 are used by the latest multiple-system based consensus decoding techniques (DeNero et al., 2010; Duan et al., 2010). We use them as the starting point. Each time, we add one more feature set and describe the changes of performances by drawing two curves for each HM decoding algorithm on MT08 in Figure 3. 1264 Figure 3: Effects of using different sets of HM decoding features on MT08 With Set-1 used only, HM-decoding has already outperformed the best component system, which shows the strong contributions of these features as proved in related work; small gains (+0.2 BLEU points) are achieved by using 8 stemmed n-gram posterior features in Set-2, which shows consensus statistics based on n-grams in their stem forms are also helpful; n-gram and length posterior features based on mixture search space bring improvements as well; reordering rule penalty features and count features for unseen n-grams boost newly generated hypotheses specific for HM decoding, and they contribute to the overall improvements. 4.5 Comparison to System Combination Word-level system combination is state-of-the-art method to improve translation performance using outputs generated by multiple SMT systems. In this paper, we compare our HM decoding with the combination method proposed by Li et al. (2009b). Evaluation results are shown in Table 3. Model BLEU% MT04 MT05 MT06 MT08 SC 41.14 40.70 36.04 31.16 BTG-HMD 41.24 41.26+ 36.76+ 31.69+ SCFG-HMD 41.31+ 41.19+ 36.63+ 31.52+ Table 3: HM decoding vs. system combination (+: significantly better than SC with < 0.05) Compared to word-level system combination, both BTG-HMD and SCFG-HMD can provide significant improvements. We think the potential reason for these improvements is that, system combination can only use a small portion of the component systems’ search spaces; HM decoding, on the other hand, can make full use of the entire translation spaces of all component systems. 4.6 Comparison to Consensus Decoding Consensus decoding is another decoding technique that motivates our approach. We compare our HM decoding with two latest multiple-system based consensus decoding approaches, co-decoding and model combination. We list the comparison results in Table 4, in which CD-PB and CD-DHPB denote the translation results of two member systems in co-decoding respectively, CD-Comb denotes the results of further combination using outputs of CD-PB and CD-DHPB, MC denotes the results of model combination. Model BLEU% MT04 MT05 MT06 MT08 CD-PB 40.39 40.34 35.20 30.39 CD-DHPB 40.81 40.56 35.73 30.87 CD-Comb 41.27 41.02 36.37 31.54 MC 41.19 40.96 36.30 31.43 BTG-HMD 41.24 41.26+ 36.76+ 31.69 SCFG-HMD 41.31 41.19 36.63+ 31.52 Table 4: HM decoding vs. consensus decoding (+: significantly better than the best result of consensus decoding methods with < 0.05) Table 4 shows that after an additional system combination procedure, CD-Comb performs slight better than MC. Both BTG-HMD and SCFGHMD perform consistent better than CD and MC on all blind test sets, due to its richer generative capability and usage of larger search spaces. 
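The n-gram posterior features that account for most of these gains (Sets 1-3 above, and the consensus statistics behind CD and MC) are computed over forests with the efficient algorithm of Kumar et al. (2009). The sketch below only illustrates the idea on a plain N-best list with normalized posteriors; the function names and the particular feature form (sums of log-posteriors per n-gram order) are our own choices rather than the paper's.

```python
# Illustrative N-best approximation of n-gram posterior features; the paper
# computes them over translation forests instead.
import math
from collections import defaultdict

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_posteriors(nbest, max_n=4):
    """nbest: list of (tokens, posterior) with posteriors summing to 1.
    post[n][ng] = total posterior mass of hypotheses that contain ng."""
    post = {n: defaultdict(float) for n in range(1, max_n + 1)}
    for tokens, p in nbest:
        for n in range(1, max_n + 1):
            for ng in set(ngrams(tokens, n)):  # presence-based accumulation
                post[n][ng] += p
    return post

def posterior_features(candidate, post, max_n=4, floor=1e-10):
    """One feature per n-gram order: sum of log-posteriors of the candidate's
    n-grams, with unseen n-grams floored to a small constant."""
    return [sum(math.log(max(post[n].get(ng, 0.0), floor))
                for ng in ngrams(candidate, n))
            for n in range(1, max_n + 1)]

# toy usage with a two-hypothesis "search space"
pb_nbest = [("the gunman was killed".split(), 0.6),
            ("the gunman is killed".split(), 0.4)]
print(posterior_features("the gunman was killed".split(),
                         ngram_posteriors(pb_nbest)))
```

In the 30-feature set above, posteriors of this kind are computed separately for each component forest, for the stemmed forests, and for the mixture search space of the HM decoder.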
4.7 System Combination over BTG-HMD and SCFG-HMD Outputs As BTG-HMD and SCFG-HMD are based on two different decoding grammars, we could perform system combination over the outputs of these two settings (SCBTG+SCFG) for further improvements as well, just as Li et al. (2009a) did in co-decoding. We present evaluation results in Table 5. Model BLEU% MT04 MT05 MT06 MT08 BTG-HMD 41.24 41.26 36.76 31.69 SCFG-HMD 41.31 41.19 36.63 31.52 SCBTG+SCFG 41.74+ 41.53+ 37.11+ 32.06+ Table 5: System combination based on the outputs of BTG-HMD and SCFG-HMD (+: significantly better than the best HM decoding algorithm (SCFG-HMD) with < 0.05) 30.5 30.7 30.9 31.1 31.3 31.5 31.7 31.9 Set-1 Set-2 Set-3 Set-4 Set-5 BTG-HMD SCFG-HMD 1265 After system combination, translation results are significantly better than all decoding approaches investigated in this paper: up to 2.11 BLEU points over the best component system (DHPB), up to 1.07 BLEU points over system combination, up to 0.74 BLEU points over co-decoding, and up to 0.81 BLEU points over model combination. 4.8 Evaluation of Oracle Translations In the last part, we evaluate the quality of oracle translations on the n-best lists generated by HM decoding and all decoding approaches discussed in this paper. Oracle performances are obtained using the metric of sentence-level BLEU score proposed by Ye et al. (2007), and each decoding approach outputs its 1000-best hypotheses, which are used to extract oracle translations. Model BLEU% MT04 MT05 MT06 MT08 PB 49.53 48.36 43.69 39.39 DHPB 50.66 49.59 44.68 40.47 SC 51.77 50.84 46.87 42.11 CD-PB 50.26 50.10 45.65 40.52 CD-DHPB 51.91 50.61 46.23 41.01 CD-Comb 52.10 51.00 46.95 42.20 MC 52.03 51.22 46.60 42.23 BTG-HMD 52.69+ 51.75+ 47.08 42.71+ SCFG-HMD 52.94+ 51.40 47.27+ 42.45+ SCBTG+SCFG 53.58+ 52.03+ 47.90+ 43.07+ Table 6: Oracle performances of different methods (+: significantly better than the best multiple-system based decoding method (CD-Comb) with < 0.05) Results are shown in Table 6: compared to each single component system, decoding methods based on multiple SMT systems can provide significant improvements on oracle translations; word-level system combination, collaborative decoding and model combination show similar performances, in which CD-Comb performs best; BTG-HMD, SCFG-HMD and SCBTG+SCFG can obtain significant improvements than all the other approaches, and SCBTG+SCFG performs best on all evaluation sets. 5 Conclusion In this paper, we have presented the hypothesis mixture decoding approach to combine multiple SMT models, in which hypotheses generated by multiple component systems are used to compose new translations. HM decoding method integrates the advantages of both system combination and consensus decoding techniques into a unified framework. Experimental results across different NIST Chinese-to-English MT evaluation data sets have validated the effectiveness of our approach. In the future, we will include more SMT models and explore more features, such as syntax-based features, helping to improve the performance of HM decoding. We also plan to investigate more complicated reordering models in HM decoding. References David Chiang. 2005. A Hierarchical Phrase-based Model for Statistical Machine Translation. In Proceedings of the Association for Computational Linguistics, pages 263-270. David Chiang. 2010. Learning to Translate with Source and Target Syntax. In Proceedings of the Association for Computational Linguistics, pages 1443-1452. Lei Cui, Dongdong Zhang, Mu Li, Ming Zhou, and Tiejun Zhao. 
2010. Hybrid Decoding: Decoding with Partial Hypotheses Combination over Multiple SMT Systems. In Proceedings of the International Conference on Computational Linguistics, pages 214-222. John DeNero, David Chiang, and Kevin Knight. 2009. Fast Consensus Decoding over Translation Forests. In Proceedings of the Association for Computational Linguistics, pages 567-575. John DeNero, Shankar Kumar, Ciprian Chelba and Franz Och. 2010. Model Combination for Machine Translation. In Proceedings of the North American Association for Computational Linguistics, pages 975-983. Nan Duan, Mu Li, Dongdong Zhang, and Ming Zhou. 2010. Mixture Model-based Minimum Bayes Risk Decoding using Multiple Machine Translation Systems. In Proceedings of the International Conference on Computational Linguistics, pages 313-321. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable Inference and Training of Context-Rich Syntactic Translation Models. In Proceedings of the Association for Computational Linguistics, pages 961-968. Xiaodong He, Mei Yang, Jianfeng Gao, Patrick Nguyen, and Robert Moore. 2008. Indirect-HMMbased Hypothesis Alignment for Combining Outputs from Machine Translation Systems. In Proceedings of the Conference on Empirical Methods on Natural Language Processing, pages 98-107. 1266 Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the Conference on Empirical Methods on Natural Language Processing, pages 388-395. Shankar Kumar and William Byrne. 2004. Minimum Bayes-Risk Decoding for Statistical Machine Translation. In Proceedings of the North American Association for Computational Linguistics, pages 169176. Shankar Kumar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient Minimum Error Rate Training and Minimum Bayes-Risk Decoding for Translation Hypergraphs and Lattices. In Proceedings of the Association for Computational Linguistics, pages 163-171. Mu Li, Nan Duan, Dongdong Zhang, Chi-Ho Li, and Ming Zhou. 2009a. Collaborative Decoding: Partial Hypothesis Re-Ranking Using Translation Consensus between Decoders. In Proceedings of the Association for Computational Linguistics, pages 585-592. Chi-Ho Li, Xiaodong He, Yupeng Liu, and Ning Xi. 2009b. Incremental HMM Alignment for MT system Combination. In Proceedings of the Association for Computational Linguistics, pages 949-957. Yang Liu, Haitao Mi, Yang Feng, and Qun Liu. 2009. Joint Decoding with Multiple Translation Models. In Proceedings of the Association for Computational Linguistics, pages 576-584. Franz Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the Association for Computational Linguistics, pages 160-167. Franz Och and Hermann Ney. 2004. The Alignment Template Approach to Statistical Machine Translation. Computational Linguistics, 30(4): 417-449. Kishore Papineni, Salim Roukos, Todd Ward, and Weijing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the Association for Computational Linguistics, pages 311-318. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new String-to-Dependency Machine Translation Algorithm with a Target Dependency Language Model. In Proceedings of the Association for Computational Linguistics, pages 577-585. Antti-Veikko Rosti, Spyros Matsoukas, and Richard Schwartz. 2007. Improved Word-Level System Combination for Machine Translation. 
In Proceedings of the Association for Computational Linguistics, pages 312-319. Roy Tromble, Shankar Kumar, Franz Och, and Wolfgang Macherey. 2008. Lattice Minimum Bayes-Risk Decoding for Statistical Machine Translation. In Proceedings of the Conference on Empirical Methods on Natural Language Processing, pages 620-629. Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3): 377-404. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum Entropy based Phrase Reordering Model for Statistical Machine Translation. In Proceedings of the Association for Computational Linguistics, pages 521-528. Yang Ye, Ming Zhou, and Chin-Yew Lin. 2007. Sentence Level Machine Translation Evaluation as a Ranking Problem: one step aside from BLEU. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 240-247.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1268–1277, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Minimum Bayes-risk System Combination Jes´us Gonz´alez-Rubio Instituto Tecnol´ogico de Inform´atica U. Polit`ecnica de Val`encia 46022 Valencia, Spain [email protected] Alfons Juan Francisco Casacuberta D. de Sistemas Inform´aticos y Computaci´on U. Polit`ecnica de Val`encia 46022 Valencia, Spain {ajuan,fcn}@dsic.upv.es Abstract We present minimum Bayes-risk system combination, a method that integrates consensus decoding and system combination into a unified multi-system minimum Bayes-risk (MBR) technique. Unlike other MBR methods that re-rank translations of a single SMT system, MBR system combination uses the MBR decision rule and a linear combination of the component systems’ probability distributions to search for the minimum risk translation among all the finite-length strings over the output vocabulary. We introduce expected BLEU, an approximation to the BLEU score that allows to efficiently apply MBR in these conditions. MBR system combination is a general method that is independent of specific SMT models, enabling us to combine systems with heterogeneous structure. Experiments show that our approach bring significant improvements to single-system-based MBR decoding and achieves comparable results to different state-of-the-art system combination methods. 1 Introduction Once statistical models are trained, a decoding approach determines what translations are finally selected. Two parallel lines of research have shown consistent improvements over the max–derivation decoding objective, which selects the highest probability derivation. Consensus decoding procedures select translations for a single system with a minimum Bayes risk (MBR) (Kumar and Byrne, 2004). System combination procedures, on the other hand, generate translations from the output of multiple component systems by combining the best fragments of these outputs (Frederking and Nirenburg, 1994). In this paper, we present minimum Bayes risk system combination, a technique that unifies these two approaches by learning a consensus translation over multiple underlying component systems. MBR system combination operates directly on the outputs of the component models. We perform an MBR decoding using a linear combination of the component models’ probability distributions. Instead of re-ranking the translations provided by the component systems, we search for the hypothesis with the minimum expected translation error among all the possible finite-length strings in the target language. By using a loss function based on BLEU (Papineni et al., 2002), we avoid the hypothesis alignment problem that is central to standard system combination approaches (Rosti et al., 2007). MBR system combination assumes only that each translation model can produce expectations of n-gram counts; the latent derivation structures of the component systems can differ arbitrary. This flexibility allows us to combine a great variety of SMT systems. 
The key contributions of this paper are three: the usage of a linear combination of distributions within the MBR decoding, which allows multiple SMT models to be involved in, and makes the computation of n-grams statistics to be more accurate; the decoding in an extended search space, which allows to find better hypotheses than the evidences provided by the component models; and the use of an expected BLEU score instead of the sentence-wise BLEU, which allows to efficiently apply MBR decoding in the huge search space under consideration. We evaluate in a multi-source translation task obtaining improvements of up to +2.0 BLEU abs. over the best single system max-derivation, and state-ofthe-art performance in the system combination task of the ACL 2010 workshop on SMT. 1268 2 Related Work MBR system combination is a multi-system generalization of MBR decoding where the space of hypotheses is not constrained to the space of evidences. We expand the space of hypotheses following some underlying ideas of system combination techniques. 2.1 Minimum Bayes risk In SMT, MBR decoding allows to minimize the loss of the output for a single translation system. MBR is generally implemented by re-ranking an Nbest list of translations produced by a first pass decoder (Kumar and Byrne, 2004). Different techniques to widen the search space have been described (Tromble et al., 2008; DeNero et al., 2009; Kumar et al., 2009; Li et al., 2009). These works extend the traditional MBR algorithms based on Nbest lists to work with lattices. The use of MBR to combine the outputs of various MT systems has also been explored previously. Duan et al. (2010) present an MBR decoding that makes use of a mixture of different SMT systems to improve translation accuracy. Our technique differs in that we use a linear combination instead of a mixture, which avoids the problem of component systems not sharing the same search space; perform the decoding in a search space larger than the outputs of the component models; and optimize an expected BLEU score instead of the linear approximation to it described in (Tromble et al., 2008). DeNero et al. (2010) present model combination, a multi-system lattice MBR decoding on the conjoined evidences spaces of the component systems. Our technique differs in that we perform the search in an extended search space not restricted to the provided evidences, have fewer parameters to learn, and optimizes an expected BLEU score instead of the linear BLEU approximation. Another MBR-related technique to combine the outputs of various MT systems was presented by Gonz´alez-Rubio and Casacuberta (2010). They use different median string (Fu, 1982) algorithms to combine various machine translation systems. Our approach differs in that we take into account the posterior distribution over translations instead of considering each translation equally likely, optimize the expected BLEU score instead of a sentence-wise measure such as the edit distance or the sentencelevel BLEU, and take into account the quality differences by associating a tunable scaling factor to each system. 2.2 System Combination System combination techniques in MT take as input the outputs {e1, · · · , eN} of N translation systems, where en is a structured translation object (or N-best lists thereof), typically viewed as a sequence of words. The dominant approach in the field chooses a primary translation ep as a backbone, then finds an alignment an to the backbone for each en. 
A new search space is constructed from these backbone-aligned outputs and then a voting procedure of feature-based model predicts a final consensus translation (Rosti et al., 2007). MBR system combination entirely avoids this alignment problem by considering hypotheses as n-gram occurrence vectors rather than word sequences. MBR system combination performs the decoding in a larger search space and includes statistics from the components’ posteriors, whereas system combination techniques typically do not. Despite these advantages, system combination may be more appropriate in some settings. In particular, MBR system combination is designed primarily for statistical systems that generate N-best or lattice outputs. MBR system combination can integrate non-statistical systems that generate either a single or an unweighted output. However, we would not expect the same strong performance from MBR system combination in these constrained settings. 3 Minimum Bayes risk Decoding MBR decoding aims to find the candidate hypothesis that has the least expected loss under a probability model (Bickel and Doksum, 1977). We begin with a review of MBR for SMT. SMT can be described as a mapping of a word sequence f in a source language to a word sequence e in a target language; this mapping is produced by the MT decoder D(f). If the reference translation e is known, the decoder performance can be measured by the loss function L(e, D(f)). Given such a loss function L(e, e′) between an automatic translation e′ and a reference e, and an underlying proba1269 bility model P(e|f), MBR decoding has the following form (Goel and Byrne, 2000; Kumar and Byrne, 2004): ˆe = arg min e′∈E R(e′) (1) = arg min e′∈E X e∈E P(e|f) · L(e, e′) , (2) where R(e′) denotes the Bayes risk of candidate translation e′ under loss function L, and E represents the space of translations. If the loss function between any two hypotheses can be bounded: L(e, e′) ≤Lmax, the MBR decoder can be rewritten in term of a similarity function S(e, e′) = Lmax −L(e, e′). In this case, instead of minimizing the Bayes risk, we maximize the Bayes gain G(e′): ˆe = arg max e′∈E G(e′) (3) = arg max e′∈E X e∈E P(e|f) · S(e, e′) . (4) MBR decoding can use different spaces for hypothesis selection and gain computation (arg max and summatory in Eq. (4)). Therefore, the MBR decoder can be more generally written as follows: ˆe = arg max e′∈Eh X e∈Ee P(e|f) · S(e, e′) , (5) where Eh refers to the hypotheses space form where the translations are chosen and Ee refers to the evidences space that is used to compute the Bayes gain. We will investigate the expansion of the hypotheses space while keeping the evidences space as provided by the decoder. 4 MBR System Combination MBR system combination is a multi-system generalization of MBR decoding. It uses the MBR decision rule on a linear combination of the probability distributions of the component systems. Unlike existing MBR decoding methods that re-rank translation outputs, MBR system combination search for the minimum risk hypotheses on the complete set of finite-length hypotheses over the output vocabulary. We assume the component systems to be statistically independent and define the Bayes gain as a linear combination of the Bayes gains of the components. Each system provides its own space of evidences Dn(f) and its posterior distribution over translations Pn(e|f). 
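The per-system gains G_n(e') introduced next instantiate the single-system formulation of Section 3; for reference, Eqs. (1)-(5) read, in our transcription:

```latex
% Single-system MBR decoding (Eqs. 1--2):
\hat{e} = \operatorname*{arg\,min}_{e' \in E} R(e')
        = \operatorname*{arg\,min}_{e' \in E} \sum_{e \in E} P(e \mid f)\, L(e, e')
% With a bounded loss and S(e,e') = L_{max} - L(e,e'), this is equivalent to
% maximizing the Bayes gain (Eqs. 3--4):
\hat{e} = \operatorname*{arg\,max}_{e' \in E} G(e')
        = \operatorname*{arg\,max}_{e' \in E} \sum_{e \in E} P(e \mid f)\, S(e, e')
% Allowing separate hypothesis and evidence spaces (Eq. 5):
\hat{e} = \operatorname*{arg\,max}_{e' \in E_h} \sum_{e \in E_e} P(e \mid f)\, S(e, e')
```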
Given a sentence f in the source language, MBR system combination is written as follows: ˆe = arg max e′∈Eh G(e′) (6) ≈arg max e′∈Eh N X n=1 αn · Gn(e′) (7) = arg max e′∈Eh N X n=1 αn · X e∈Dn(f) Pn(e|f) · S(e, e′) , (8) where N is the total number of component systems, Eh represents the hypotheses space where the search is performed, Gn(e′) is the Bayes gain of hypothesis e′ given by the nth component system and αn is a scaling factor introduced to take into account the differences in quality of the component models. It is worth mentioning that by using a linear combination instead of a mixture model, we avoid the problem of component systems not sharing the same search space (Duan et al., 2010). MBR system combination parameters training and decoding in the extended hypotheses space are described below. 4.1 Model Training We learn the scaling factors in Eq. (8) using minimum error rate training (MERT) (Och, 2003). MERT maximizes the translation quality of ˆe on a held-out set, according to an evaluation metric that compares to a reference set. We used BLEU, choosing the scaling factors to maximize BLEU score of the set of translations predicted by MBR system combination. We perform the maximization by means of the down-hill simplex algorithm (Nelder and Mead, 1965). 4.2 Model Decoding In most MBR algorithms, the hypotheses space is equal to the evidences space. Following the underlying idea of system combination, we are interested in extend the hypotheses space by including new sentences created using fragments of the hypotheses in the evidences spaces of the component models. We perform the search (argmax operation in Eq. (8)) 1270 Algorithm 1 MBR system combination decoding. Require: Initial hypothesis e Require: Vocabulary the evidences Σ 1: ˆe ←e 2: repeat 3: ecur ←ˆe 4: for j = 1 to |ecur| do 5: ˆes ←ecur 6: for a ∈Σ do 7: e′ s ←Substitute(ecur, a, j) 8: if G(e′ s) > G(ˆes) then 9: ˆes ←e′ s 10: ˆed ←Delete(ecur, j) 11: ˆei ←ecur 12: for a ∈Σ do 13: e′ i ←Insert(ecur, a, j) 14: if G(e′ i) > G(ˆei) then 15: ˆei ←e′ i 16: ˆe ←arg maxe′∈{ecur,ˆes,ˆed,ˆei} G(e′) 17: until G(ˆe) ̸> G(ecur) 18: return ecur Ensure: G(ecur) ≥G(e) using the approximate median string (AMS) algorithm (Mart´ınez et al., 2000). AMS algorithm perform a search on a hypotheses space equal to the free monoid Σ∗of the vocabulary of the evidences Σ = V oc(Ee). The AMS algorithm is shown in Algorithm 1. AMS starts with an initial hypothesis e that is modified using edit operations until there is no improvement in the Bayes gain (Lines 3–16). On each position j of the current solution ecur, we apply all the possible single edit operations: substitution of the jth word of ecur by each word a in the vocabulary (Lines 5–9), deletion of the jth word of ecur (Line 10) and insertion of each word a in the vocabulary in the jth position of ecur (Lines 11–15). If the Bayes gain of any of the new edited hypotheses is higher than the Bayes gain of the current hypothesis (Line 17), we repeat the loop with this new hypotheses ˆe, in other case, we return the current hypothesis. AMS algorithm takes as input an initial hypothesis e and the combined vocabulary of the evidences spaces Σ. Its output is a possibly new hypothesis whose Bayes gain is assured to be higher or equal than the Bayes gain of the initial hypothesis. 
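Algorithm 1 can be read as greedy hill-climbing over single-word edits. The sketch below is a simplification rather than a line-by-line transcription: it applies the single best edit found in each sweep, and it assumes a gain callable implementing G(.) of Eq. (8); all names are ours.

```python
# Rough sketch of approximate-median-string decoding (Algorithm 1) as greedy
# hill-climbing over substitutions, deletions, and insertions.
def ams_decode(initial, vocab, gain):
    current = list(initial)
    best_score = gain(current)
    improved = True
    while improved:
        improved = False
        best_cand, best_cand_score = None, best_score

        def consider(cand):
            nonlocal best_cand, best_cand_score
            s = gain(cand)
            if s > best_cand_score:
                best_cand, best_cand_score = cand, s

        for j in range(len(current) + 1):
            for a in vocab:                              # insertion at position j
                consider(current[:j] + [a] + current[j:])
            if j < len(current):
                for a in vocab:                          # substitution at position j
                    consider(current[:j] + [a] + current[j + 1:])
                consider(current[:j] + current[j + 1:])  # deletion of position j
        if best_cand is not None:
            current, best_score, improved = best_cand, best_cand_score, True
    return current
```

Each call to gain here corresponds to the cost C_G in the complexity bound discussed next.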
The complexity of the main loop (lines 2-17) is O(|ecur| · |Σ| · CG), where CG is the cost of computing the gain of a hypothesis, and usually only a moderate number of iterations (< 10) is needed to converge (Mart´ınez et al., 2000). 5 Computing BLEU-based Gain We are interested in performing MBR system combination under BLEU. BLEU behaves as a score function: its value ranges between 0 and 1 and a larger value reflects a higher similarity. Therefore, we rewrite the gain function G(·) using single evidence (or reference) BLEU (Papineni et al., 2002) as the similarity function: Gn(e′) = X e∈Dn(f) Pn(e|f) · BLEU(e, e′) (9) BLEU = 4 Y k=1 mk ck  1 4 · min  e1−r c , 1.0  , (10) where r is the length of the evidence, c the length of the hypothesis, mk the number of n-gram matches of size k, and ck the count of n-grams of size k in the hypothesis. The evidences space Dn(f) may contain a huge number of hypotheses1 which often make impractical to compute Eq. (9) directly. To avoid this problem, Tromble et al. (2008) propose linear BLEU, an approximation to the BLEU score to efficiently perform MBR decoding when the search space is represented with lattices. However, our hypotheses space is the full set of finite-length strings in the target vocabulary and can not be represented in a lattice. In Eq. (9), we have one hypothesis e′ that is to be compared to a set of evidences e ∈Dn(f) which follow a probability distribution Pn(e|f). Instead of computing the expected BLEU score by calculating the BLEU score with respect to each of the evidences, our approach will be to use the expected n-gram counts and sentence length of the evidences to compute a single-reference BLEU score. We replace the reference statistics (r and mn in Eq. (10)) by the expected statistics (r′ and m′ n) given the pos1For example, in a lattice the number of hypotheses may be exponential in the size of its state set. 1271 terior distribution Pn(e|f) over the evidences: Gn(e′) = 4 Y k=1 m′ k ck  1 4 · min  e1−r′ c , 1.0  (11) r′ = X e∈Dn(f) |e| · Pn(e|f) (12) m′ k = X ng∈Nk(e′) min(Ce′(ng), C′(ng)) (13) C′(ng) = X e∈Dn(f) Ce(ng) · Pn(e|f) , (14) where Nk(e′) is the set of n-grams of size k in the hypothesis, Ce′(ng) is the count of the n-gram ng in the hypothesis and C′(ng) is the expected count of ng in the evidences. To compute the n-gram matchings m′ k, the count of each n-gram is truncated, if necessary, to not exceed the expected count for that n-gram in the evidences. We have replaced a summation over a possibly exponential number of items (e′ ∈Dn(f) in Eq. (9)) with a summation over a polynomial number of ngrams that occur in the evidences2. Both, the expected length of the evidences r′ and their expected n-gram counts m′ k can be pre-computed efficiently from N-best lists and translation lattices (Kumar et al., 2009; DeNero et al., 2010). 6 Experiments We report results on a multi-source translation task. From the Europarl corpus released for the ACL 2006 workshop on MT (WMT2006), we select those sentence pairs from the German–English (de–en), Spanish–English (es–en) and French– English (fr–en) sub-corpora that share the same English translation. We obtain a multi-source corpus with German, Spanish and French as source languages and English as target language. All the experiments were carried out with the lowercased and tokenized version of this corpus. We report results using BLEU (Papineni et al., 2002) and translation edit rate (Snover et al., 2006) (TER). 
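For concreteness, the expected-BLEU statistics of Section 5 can be pre-computed from an N-best list roughly as sketched below; this is an illustrative reading of Eqs. (11)-(14) with our own names, and it omits the smoothing a practical implementation would want when some match count m'_k is zero.

```python
# Sketch of the expected-BLEU gain (Eqs. 11-14) against one system's N-best
# evidences; Gn(e') in Eq. (8) would be this value for system n.
import math
from collections import Counter

def ngram_counts(tokens, k):
    return Counter(tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1))

def expected_stats(nbest, max_k=4):
    """nbest: list of (tokens, posterior). Returns the expected evidence
    length r' and the expected n-gram counts C'(ng)."""
    exp_len = sum(len(toks) * p for toks, p in nbest)
    exp_counts = {k: Counter() for k in range(1, max_k + 1)}
    for toks, p in nbest:
        for k in range(1, max_k + 1):
            for ng, c in ngram_counts(toks, k).items():
                exp_counts[k][ng] += c * p
    return exp_len, exp_counts

def expected_bleu_gain(candidate, exp_len, exp_counts, max_k=4):
    c = len(candidate)
    precisions = []
    for k in range(1, max_k + 1):
        cand = ngram_counts(candidate, k)
        c_k = sum(cand.values())
        m_k = sum(min(cnt, exp_counts[k][ng]) for ng, cnt in cand.items())
        precisions.append(m_k / c_k if c_k else 0.0)
    if any(p == 0.0 for p in precisions):
        return 0.0                      # no smoothing in this sketch
    brevity = min(math.exp(1.0 - exp_len / c), 1.0)
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_k)
```

The gain fed to the search of Section 4.2 is then the alpha_n-weighted sum of these per-system values.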
We measure statistical significance using 2If Dn(f) is represented by a lattice, the number of n-grams is polynomial in the number of edges in the lattice. System dev test BLEU TER BLEU TER de→en MAX 25.3 60.5 25.6∗ 60.3 MBR 25.1 60.7 25.4∗ 60.5 es→en MAX 30.9∗ 53.3∗ 30.4∗ 53.9∗ MBR 31.0∗ 53.4∗ 30.4∗ 54.0∗ fr→en MAX 30.7∗ 53.9∗ 30.8∗ 53.4∗ MBR 30.7∗ 53.8∗ 30.9∗ 53.4∗ Table 1: Performance of base systems. Approach dev test BLEU TER BLEU TER Best MAX 30.9∗ 53.3∗ 30.8∗ 53.4∗ Best MBR 31.0∗ 53.4∗ 30.9∗ 53.4∗ MBR-SC 32.3 52.5 32.8 52.3 Table 2: Performance from best single system maxderivation decoding (Best MAX), the best single system minimum Bayes risk decoding (Best MBR) and minimum Bayes risk system combination (MBR-SC) combining three systems. 95% confidence intervals computed using paired bootstrap re-sampling (Zhang and Vogel, 2004). In all table cells (except for Table 3) systems without statistically significant differences are marked with the same superscript. 6.1 Base Systems We combine outputs from three systems, each one translating from one source language (German, Spanish or French) into English. Each individual system is a phrase-based system trained using the Moses toolkit (Koehn et al., 2007). The parameters of the systems were tuned using MERT (Och, 2003) to optimize BLEU on the development set. Each base system yields state-of-the-art performance, summarized in Table 1. For each system, we report the performance of max-derivation decoding (MAX) and 1000-best3 MBR decoding (Kumar and Byrne, 2004). 6.2 Experimental Results Table 2 compares MBR system combination (MBRSC) to the best MAX and MBR systems. Both Best 3Ehling et al. (2007) studied up to 10000-best and show that the use of 1000-best candidates is sufficient for MBR decoding. 1272 Setup BLEU TER Best MBR 30.9 53.4 MBR-SC Expected 30.9 53.5 MBR-SC E/Conjoin 32.4 52.1 MBR-SC E/C/evidences-best 30.9 53.5 MBR-SC E/C/hypotheses-best 31.8 52.5 MBR-SC E/C/Extended 32.7 52.3 MBR-SC E/C/Ex/MERT 32.8 52.3 Table 3: Results on the test set for different setups of minimum Bayes risk system combination. MBR and MBR-SC were computed on 1000-best lists. MBR-SC uses expected BLEU as gain function using the conjoined evidences spaces of the three systems to compute expected BLEU statistics. It performs the search in the free monoid of the output vocabulary, and its model parameters were tuned using MERT on the development set. This is the standard setup for MBR system combination, and we refer to it as MBR-SC-E/C/Ex/MERT in Table 3. MBR system combination improves single Best MAX system by +2.0 BLEU points in test, and always improves over MBR. This improvement could arise due to multiple reasons: the expected BLEU gain, the larger evidences space, the extended hypotheses space, or the MERT tuned scaling factor values. Table 3 teases apart these contributions. We first apply MBR-SC to the best system (MBRSC-Expected). Best MBR and MBR-SC-Expected differ only in the gain function: MBR uses sentence level BLEU while MBR-SC-Expected uses the expected BLEU gain described in Section 5. MBRSC-Expected performance is comparable to MBR decoding on the 1000-best list from the single best system. The expected BLEU approximation performs as well as sentence-level BLEU and additionally requires less total computation. We now extend the evidences space to the conjoined 1000-best lists (MBR-SC-E/Conjoin). MBRSC-E/Conjoin is much better than the best MBR on a single system. 
This implies that either the expected BLEU statistics computed in the conjoined evidences space are stronger or the larger conjoined evidences spaces introduce better hypotheses. When we restrict the BLEU statistics to be computed from only the best system’s evidences space (MBR-SC-E/C/evidences-best), BLEU scores dramatically decrease relative to MBR-SC-E/Conjoin. This implies that the expected BLEU statistics computed over the conjoined 1000-best lists are stronger than the corresponding statistics from the single best system. On the other hand, if we restrict the search space to only the 1000-best list of the best system (MBR-SC-E/C/hypotheses-best), BLEU scores also decrease relative to MBR-SC-E/Conjoin. This implies that the conjoined search space also contains better hypotheses than the single best system’s search space. These results validate our approach. The linear combination of the probability distributions in the conjoined evidences spaces allows to compute much stronger statistics for the expected BLEU gain and also contains some better hypotheses than the single best system’s search space does. We next expand the conjoined evidences spaces using the decoding algorithm described in Section 4.2 (MBR-SC-E/C/Extended). In this case, the expected BLEU statistics are computed from the conjoined 1000-best lists of the three systems, but the hypotheses space where we perform the decoding is expanded to the set of all possible finitelength hypotheses over the vocabulary of the evidences. We take the output of MBR-SC-E/Conjoin as the initial hypotheses of the decoding (see Algorithm 1). MBR-SC-E/C/Extended improves BLEU score of MBR-SC-E/Conjoin but obtains a slightly worse TER score. Since these two systems are identical in their expected BLEU statistics, the improvements in BLEU imply that the extended search space has introduced better hypotheses. The degradation in TER performance can be explained by the use of a BLEU-based gain function in the decoding process. We finally compute the optimum values for the scaling factors of the different system using MERT (MBR-SC-E/C/Ex/MERT). MBR-SCE/C/Ex/MERT slightly improves BLEU score of MBR-SC-E/C/Extended. This implies that the optimal values of the scaling factors do not deviate much from 1.0; a similar result was reported in (Och and Ney, 2001). We hypothesize that this is because the three component systems share the same SMT model, pre-process and decoding. We expect to obtain larger improvements when combining systems implementing different MT paradigms. 1273 30.5 31 31.5 32 32.5 33 100 101 102 103 BLEU Number of hypotheses in the N-best lists Best MAX MBR-SC MBR-SC C/Extended MBR-SC Conjoin Figure 1: Performance of minimum Bayes risk system combination (MBR-SC) for different sizes of the evidences space in comparison to other MBR-SC setups. MBR-SC-E/C/Ex/MERT is the standard setup for MBR system combination and, from now, on we will refer to it as MBR-SC. We next evaluate performance of MBR system combination on N-best lists of increasing sizes, and compare it to MBR-SC-E/C/Extended and MBRSC-E/Conjoin in the same N-best lists. We list the results of the Best MAX system for comparison. Results in Figure 1 confirm the conclusions extracted from results displayed in Table 3. MBR-SCConjoin is consistently better than the Best MAX system, and differences in BLEU increase with the size of the evidences space. 
This implies that the linear combination of posterior probabilities allow to compute stronger statistics for the expected BLEU gain, and, in addition, the larger the evidences space is, the stronger the computed statistics are. MBR-SC-C/Extended is also consistently better than MBR-SC-Conjoin with an almost constant improvement of +0.4 BLEU points. This result show that the extended search space always contains better hypotheses than the conjoined evidences spaces; also confirms the soundness of Algorithm 1 that allows to reach them. Finally, MBR-SC also slightly improves MBR-SC-C/Extended. The optimization of the scaling factors allows only small improvements in BLEU. Figure 2 display the MBR system combination translation and compare it to the max-derivation translations of the three component systems. Reference translation is also listed for comparison. MBRMAX de→en i will return later . MAX es→en i shall come back to that later . MAX fr→en i will return to this later . MBR-SC i will return to this point later . Reference i will return to this point later . Figure 2: MBR system combination example. SC adds word “point” to create a new translation equal to the reference. MBR-SC is able to detect that this is valuable word even though it does not appear in the max-derivation hypotheses. 6.3 Comparison to System Combination Figure 3 compares MBR system combination (MBR-SC) with state-of-the-art system combination techniques presented to the system combination task of the ACL 2010 workshop on MT (WMT2010). All system combination techniques build a “word sausage” from the outputs of the different component systems and choose a path trough the sausage with the highest score under different models. A description of these systems can be found in (CallisonBurch et al., 2010). In this task, the output of the component systems are single hypotheses or unweighted lists thereof. Therefore, we lack of the statistics of the components’ posteriors which is one of the main advantages of MBR system combination over system combination techniques. However, we find that, even in these constrained setting, MBR system combination performance is similar to the best system combination techniques for all translation directions. These experiments validate our approach. MBR system combination yields state-of-the-art performance while avoiding the challenge of aligning translation hypotheses. 7 Conclusion MBR system combination integrates consensus decoding and system combination into a unified multisystem MBR technique. MBR system combination uses the MBR decision rule on a linear combination of the component systems’ probability distributions to search for the sentence with the minimum Bayes risk on the complete set of finite-length 1274 16 18 20 22 24 26 28 30 32 cz-en en-cz de-en en-de es-en en-es fr-en en-fr BLEU MBR-SC BBN CMU DCU JHU KOC LIUM RWTH Figure 3: Performance of minimum Bayes risk system combination (MBR-SC) for different language directions in comparison to the rest of system combination techniques presented in the WMT2010 system combination task. strings in the output vocabulary. Component systems can have varied decoding strategies; we only require that each system produce an N-best list (or a lattice) of translations. This flexibility allows the technique to be applied quite broadly. For instance, Leusch et al. 
(2010) generate intermediate translations in several pivot languages, translate them separately into the target language, and generate a consensus translation out of these using a system combination technique. Likewise, these pivot translations could be combined via MBR system combination. MBR system combination has two significant advantages over current approaches to system combination. First, it does not rely on hypothesis alignment between outputs of individual systems. Aligning translation hypotheses can be challenging and has a substantial effect on combination performance (He et al., 2008). Instead of aligning the sentences, we view the sentences as vectors of n-gram counts and compute the expected statistics of the BLEU score to compute the Bayes gain. Second, we do not need to pick a backbone system for combination. Choosing a backbone system can also be challenging and also affects system combination performance (He and Toutanova, 2009). MBR system combination sidesteps this issue by working directly on the conjoined evidences space produced by the outputs of the component systems, and allows the consensus model to express system preferences via scaling factors. Despite its simplicity, MBR system combination provides strong performance by leveraging different consensus, decoding and training techniques. It outperforms best MAX or MBR derivation on each of the component systems. In addition, it obtains stateof-the-art performance in a constrained setting better suited for dominant system combination techniques. Acknowledgements Work supported by the EC (FEDER/FSE) and the Spanish MEC/MICINN under the MIPRCV “Consolider Ingenio 2010” program (CSD2007-00018), the iTrans2 (TIN2009-14511) project, the UPV 1275 under grant 20091027 and the FPU scholarship AP2006-00691. Also supported by the Spanish MITyC under the erudito.com (TSI-020110-2009439) project and by the Generalitat Valenciana under grant Prometeo/2009/014. References Peter J. Bickel and Kjell A Doksum. 1977. Mathematical statistics : basic ideas and selected topics. Holden-Day, San Francisco. Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar F. Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 17–53, Morristown, NJ, USA. Association for Computational Linguistics. John DeNero, David Chiang, and Kevin Knight. 2009. Fast consensus decoding over translation forests. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2 - Volume 2, pages 567–575, Morristown, NJ, USA. Association for Computational Linguistics. John DeNero, Shankar Kumar, Ciprian Chelba, and Franz Och. 2010. Model combination for machine translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 975–983, Morristown, NJ, USA. Association for Computational Linguistics. Nan Duan, Mu Li, Dongdong Zhang, and Ming Zhou. 2010. Mixture model-based minimum bayes risk decoding using multiple machine translation systems. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 313– 321, Beijing, China, August. Coling 2010 Organizing Committee. Nicola Ehling, Richard Zens, and Hermann Ney. 2007. 
Minimum bayes risk decoding for bleu. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 101– 104, Morristown, NJ, USA. Association for Computational Linguistics. Robert Frederking and Sergei Nirenburg. 1994. Three heads are better than one. In Proceedings of the fourth conference on Applied natural language processing, pages 95–100, Morristown, NJ, USA. Association for Computational Linguistics. K.S. Fu. 1982. Syntactic Pattern Recognition and Applications. Prentice Hall. Vaibhava Goel and William J. Byrne. 2000. Minimum bayes-risk automatic speech recognition. Computer Speech & Language, 14(2):115–135. Jes´us Gonz´alez-Rubio and Francisco Casacuberta. 2010. On the use of median string for multi-source translation. In In Proceedings of the International Conference on Pattern Recognition (ICPR2010), pages 4328– 4331. Xiaodong He and Kristina Toutanova. 2009. Joint optimization for machine translation system combination. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3 - Volume 3, pages 1202–1211, Morristown, NJ, USA. Association for Computational Linguistics. Xiaodong He, Mei Yang, Jianfeng Gao, Patrick Nguyen, and Robert Moore. 2008. Indirect-hmm-based hypothesis alignment for combining outputs from machine translation systems. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 98–107, Morristown, NJ, USA. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177– 180, Morristown, NJ, USA. Association for Computational Linguistics. Shankar Kumar and William J. Byrne. 2004. Minimum bayes-risk decoding for statistical machine translation. In HLT-NAACL, pages 169–176. Shankar Kumar, Wolfgang Macherey, Chris Dyer, and Franz Och. 2009. Efficient minimum error rate training and minimum bayes-risk decoding for translation hypergraphs and lattices. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1, pages 163–171, Morristown, NJ, USA. Association for Computational Linguistics. Gregor Leusch, Aur´elien Max, Josep Maria Crego, and Hermann Ney. 2010. Multi-pivot translation by system combination. In International Workshop on Spoken Language Translation, Paris, France, December. Zhifei Li, Jason Eisner, and Sanjeev Khudanpur. 2009. Variational decoding for statistical machine translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Process1276 ing of the AFNLP: Volume 2 - Volume 2, pages 593– 601, Morristown, NJ, USA. Association for Computational Linguistics. C. D. Mart´ınez, A. Juan, and F. Casacuberta. 2000. Use of Median String for Classification. In Proceedings of the 15th International Conference on Pattern Recognition, volume 2, pages 907–910, Barcelona (Spain), September. John A. Nelder and Roger Mead. 1965. A Simplex Method for Function Minimization. The Computer Journal, 7(4):308–313, January. Franz Josef Och and Hermann Ney. 2001. 
Statistical multi-source translation. In Machine Translation Summit, pages 253–258. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, pages 160–167, Morristown, NJ, USA. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318, Morristown, NJ, USA. Association for Computational Linguistics. Antti-Veikko Rosti, Necip Fazil Ayan, Bing Xiang, Spyros Matsoukas, Richard Schwartz, and Bonnie Dorr. 2007. Combining outputs from multiple machine translation systems. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 228–235, Rochester, New York, April. Association for Computational Linguistics. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and Ralph Weischedel. 2006. A study of translation error rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas. Roy W. Tromble, Shankar Kumar, Franz Och, and Wolfgang Macherey. 2008. Lattice minimum bayes-risk decoding for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 620–629, Morristown, NJ, USA. Association for Computational Linguistics. Ying Zhang and Stephan Vogel. 2004. Measuring confidence intervals for the machine translation evaluation metrics. In Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-2004), pages 4–6.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1278–1287, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Adjoining Tree-to-String Translation Yang Liu, Qun Liu, and Yajuan L¨u Key Laboratory of Intelligent Information Processing Institute of Computing Technology Chinese Academy of Sciences P.O. Box 2704, Beijing 100190, China {yliu,liuqun,lvyajuan}@ict.ac.cn Abstract We introduce synchronous tree adjoining grammars (TAG) into tree-to-string translation, which converts a source tree to a target string. Without reconstructing TAG derivations explicitly, our rule extraction algorithm directly learns tree-to-string rules from aligned Treebank-style trees. As tree-to-string translation casts decoding as a tree parsing problem rather than parsing, the decoder still runs fast when adjoining is included. Less than 2 times slower, the adjoining tree-tostring system improves translation quality by +0.7 BLEU over the baseline system only allowing for tree substitution on NIST ChineseEnglish test sets. 1 Introduction Syntax-based translation models, which exploit hierarchical structures of natural languages to guide machine translation, have become increasingly popular in recent years. So far, most of them have been based on synchronous context-free grammars (CFG) (Chiang, 2007), tree substitution grammars (TSG) (Eisner, 2003; Galley et al., 2006; Liu et al., 2006; Huang et al., 2006; Zhang et al., 2008), and inversion transduction grammars (ITG) (Wu, 1997; Xiong et al., 2006). Although these formalisms present simple and precise mechanisms for describing the basic recursive structure of sentences, they are not powerful enough to model some important features of natural language syntax. For example, Chiang (2006) points out that the translation of languages that can stack an unbounded number of clauses in an “inside-out” way (Wu, 1997) provably goes beyond the expressive power of synchronous CFG and TSG. Therefore, it is necessary to find ways to take advantage of more powerful synchronous grammars to improve machine translation. Synchronous tree adjoining grammars (TAG) (Shieber and Schabes, 1990) are a good candidate. As a formal tree rewriting system, TAG (Joshi et al., 1975; Joshi, 1985) provides a larger domain of locality than CFG to state linguistic dependencies that are far apart since the formalism treats trees as basic building blocks. As a mildly context-sensitive grammar, TAG is conjectured to be powerful enough to model natural languages. Synchronous TAG generalizes TAG by allowing the construction of a pair of trees using the TAG operations of substitution and adjoining on tree pairs. The idea of using synchronous TAG in machine translation has been pursued by several researchers (Abeille et al., 1990; Prigent, 1994; Dras, 1999), but only recently in its probabilistic form (Nesson et al., 2006; DeNeefe and Knight, 2009). Shieber (2007) argues that probabilistic synchronous TAG possesses appealing properties such as expressivity and trainability for building a machine translation system. However, one major challenge for applying synchronous TAG to machine translation is computational complexity. While TAG requires O(n6) time for monolingual parsing, synchronous TAG requires O(n12) for bilingual parsing. One solution is to use tree insertion grammars (TIG) introduced by Schabes and Waters (1995). 
As a restricted form of TAG, TIG still allows for adjoining of unbounded trees but only requires O(n3) time for monolingual parsing. Nesson et al. (2006) firstly demonstrate 1278 oÚ zˇongtˇong NN NP President X , α1 {I mˇeigu´o NR NP US X , α2 NP∗ NP↓ NP X∗ X↓ X , β1 NP NP∗ NP NN oÚ zˇongtˇong X X∗ X President , β2 NP NP NR {I mˇeigu´o NP NN oÚ zˇongtˇong X X US X President , α3 Figure 1: Initial and auxiliary tree pairs. The source side (Chinese) is a Treebank-style linguistic tree. The target side (English) is a purely structural tree using a single non-terminal (X). By convention, substitution and foot nodes are marked with a down arrow (↓) and an asterisk (∗), respectively. The dashed lines link substitution sites (e.g., NP↓and X↓in β1) and adjoining sites (e.g., NP and X in α2) in tree pairs. Substituting the initial tree pair α1 at the NP↓-X↓ node pair in the auxiliary tree pair β1 yields a derived tree pair β2, which can be adjoined at NN-X in α2 to generate α3. the use of synchronous TIG for machine translation and report promising results. DeNeefe and Knight (2009) prove that adjoining can improve translation quality significantly over a state-of-the-art stringto-tree system (Galley et al., 2006) that uses synchronous TSG with tractable computational complexity. In this paper, we introduce synchronous TAG into tree-to-string translation (Liu et al., 2006; Huang et al., 2006), which is the simplest and fastest among syntax-based approaches (Section 2). We propose a new rule extraction algorithm based on GHKM (Galley et al., 2004) that directly induces a synchronous TAG from an aligned and parsed bilingual corpus without converting Treebank-style trees to TAG derivations explicitly (Section 3). As tree-tostring translation takes a source parse tree as input, the decoding can be cast as a tree parsing problem (Eisner, 2003): reconstructing TAG derivations from a derived tree using tree-to-string rules that allow for both substitution and adjoining. We describe how to convert TAG derivations to translation forest (Section 4). We evaluated the new tree-to-string system on NIST Chinese-English tests and obtained consistent improvements (+0.7 BLEU) over the STSGbased baseline system without significant loss in efficiency (1.6 times slower) (Section 5). 2 Model A synchronous TAG consists of a set of linked elementary tree pairs: initial and auxiliary. An initial tree is a tree of which the interior nodes are all labeled with non-terminal symbols, and the nodes on the frontier are either words or non-terminal symbols marked with a down arrow (↓). An auxiliary tree is defined as an initial tree, except that exactly one of its frontier nodes must be marked as foot node (∗). The foot node must be labeled with a nonterminal symbol that is the same as the label of the root node. Synchronous TAG defines two operations to build derived tree pairs from elementary tree pairs: substitution and adjoining. Nodes in initial and auxiliary tree pairs are linked to indicate the correspondence between substitution and adjoining sites. Figure 1 shows three initial tree pairs (i.e., α1, α2, and α3) and two auxiliary tree pairs (i.e., β1 and β2). The dashed lines link substitution nodes (e.g., NP↓and X↓in β1) and adjoining sites (e.g., NP and X in α2) in tree pairs. 
Substituting the initial tree pair α1 at 1279 {I mˇeigu´o oÚ zˇongtˇong nê `aob¯amˇa é du`ı l qi¯angj¯ı ¯‡ sh`ıji`an ƒ± yˇuyˇı gI qiˇanz´e 0 1 2 3 4 5 6 7 8 NR NN NR P NN NN VV NN NP NP NP NP NP NP PP VP NP VP IP US President Obama has condemned the shooting incident Figure 2: A training example. Tree-to-string rules can be extracted from shaded nodes. node minimal initial rule minimal auxiliary rule NR0,1 [1] ( NR mˇeigu´o ) →US NP0,1 [2] ( NP ( x1:NR↓) ) →x1 NN1,2 [3] ( NN zˇongtˇong ) →President NP1,2 [4] ( NP ( x1:NN↓) ) →x1 [5] ( NP ( x1:NP↓) ( x2:NP↓) ) →x1 x2 [6] ( NP0:1 ( x1:NR↓) ) →x1 [7] ( NP ( x1:NP∗) ( x2:NP↓) ) →x1 x2 NP0,2 [8] ( NP0:2 ( x1:NP∗) ( x2:NP↓) ) →x1 x2 [9] ( NP0:1 ( x1:NN↓) ) →x1 [10] ( NP ( x1:NP↓) ( x2:NP∗) ) →x1 x2 [11] ( NP0:2 ( x1:NP↓) ( x2:NP∗) ) →x1 x2 NR2,3 [12] ( NR `aob¯amˇa ) →Obama NP2,3 [13] ( NP ( x1:NR↓) ) →x1 [14] ( NP ( x1:NP↓) ( x2:NP↓) ) →x1 x2 [15] ( NP0:2 ( x1:NP↓) ( x2:NP↓) ) →x1 x2 [16] ( NP ( x1:NP∗) ( x2:NP↓) ) →x1 x2 NP0,3 [17] ( NP0:1 ( x1:NR↓) ) →x1 [18] ( NP ( x1:NP↓) ( x2:NP∗) ) →x1 x2 [19] ( NP0:1 ( x1:NN↓) ) →x1 [20] ( NP0:1 ( x1:NR↓) ) →x1 NN4,5 [21] ( NN qi¯angj¯ı ) →shooting NN5,6 [22] ( NN sh`ıji`an ) →incident NP4,6 [23] ( NP ( x1:NN↓) ( x2:NN↓) ) →x1 x2 PP3,6 [24] ( PP ( du`ı ) ( x1:NP↓) ) →x1 NN7,8 [25] ( NN qiˇanz´e ) →condemned NP7,8 [26] ( NP ( x1:NN↓) ) →x1 VP6,8 [27] ( VP ( VV yˇuyˇı ) ( x1:NP↓) ) →x1 [28] ( VP ( x1:PP↓) ( x2:VP↓) ) →x2 the x1 VP3,8 [29] ( VP0:1 ( VV yˇuyˇı ) ( x1:NP↓) ) →x1 [30] ( VP ( x1:PP↓) ( x2:VP∗) ) →x2 the x1 IP0,8 [31] ( IP ( x1:NP↓) ( x2:VP↓) ) →x1 has x2 Table 1: Minimal initial and auxiliary rules extracted from Figure 2. Note that an adjoining site has a span as subscript. For example, NP0:1 in rule 6 indicates that the node is an adjoining site linked to a target node dominating the target string spanning from position 0 to position 1 (i.e., x1). The target tree is hidden because tree-to-string translation only considers the target surface string. 1280 the NP↓-X↓node pair in the auxiliary tree pair β1 yields a derived tree pair β2, which can be adjoined at NN-X in α2 to generate α3. For simplicity, we represent α2 as a tree-to-string rule: ( NP0:1 ( NR mˇeigu´o ) ) →US where NP0:1 indicates that the node is an adjoining site linked to a target node dominating the target string spanning from position 0 to position 1 (i.e., “US”). The target tree is hidden because treeto-string translation only considers the target surface string. Similarly, β1 can be written as ( NP ( x1:NP∗) ( x2:NP↓) ) →x1 x2 where x denotes a non-terminal and the subscripts indicate the correspondence between source and target non-terminals. The parameters of a probabilistic synchronous TAG are X α Pi(α) = 1 (1) X α Ps(α|η) = 1 (2) X β Pa(β|η) + Pa(NONE|η) = 1 (3) where α ranges over initial tree pairs, β over auxiliary tree pairs, and η over node pairs. Pi(α) is the probability of beginning a derivation with α; Ps(α|η) is the probability of substituting α at η; Pa(β|η) is the probability of adjoining β at η; finally, Pa(NONE|η) is the probability of nothing adjoining at η. For tree-to-string translation, these parameters can be treated as feature functions of a discriminative framework (Och, 2003) combined with other conventional features such as relative frequency, lexical weight, rule count, language model, and word count (Liu et al., 2006). 
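To make the normalization constraints of equations (1)-(3) concrete, the following sketch (ours, not the authors' code) estimates the three distributions by relative frequency from rule counts. The count dictionaries here are placeholders; the paper collects the actual counts over extracted rules following DeNeefe and Knight (2009), as described in Section 3.

```python
def estimate_stag_parameters(init_counts, subst_counts, adjoin_counts, none_counts):
    """Relative-frequency estimates for equations (1)-(3).

    init_counts[alpha]        : times alpha begins a derivation
    subst_counts[eta][alpha]  : times alpha is substituted at node pair eta
    adjoin_counts[eta][beta]  : times beta is adjoined at node pair eta
    none_counts[eta]          : times nothing adjoins at node pair eta
    """
    z_init = sum(init_counts.values())
    p_i = {a: c / z_init for a, c in init_counts.items()}        # equation (1)

    p_s = {}
    for eta, counts in subst_counts.items():                      # equation (2)
        z = sum(counts.values())
        p_s[eta] = {a: c / z for a, c in counts.items()}

    p_a = {}
    for eta, counts in adjoin_counts.items():                     # equation (3)
        z = sum(counts.values()) + none_counts.get(eta, 0)
        p_a[eta] = {b: c / z for b, c in counts.items()}
        p_a[eta]["NONE"] = none_counts.get(eta, 0) / z
    return p_i, p_s, p_a
```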
3 Rule Extraction Inducing a synchronous TAG from training data often begins with converting Treebank-style parse trees to TAG derivations (Xia, 1999; Chen and Vijay-Shanker, 2000; Chiang, 2003). DeNeefe and Knight (2009) propose an algorithm to extract synchronous TIG rules from an aligned and parsed bilingual corpus. They first classify tree nodes into heads, arguments, and adjuncts using heuristics (Collins, 2003), then transform a Treebank-style tree into a TIG derivation, and finally extract minimallysized rules from the derivation tree and the string on the other side, constrained by the alignments. Probabilistic models can be estimated by collecting counts over the derivation trees. However, one challenge is that there are many TAG derivations that can yield the same derived tree, even with respect to a single grammar. It is difficult to choose appropriate single derivations that enable the resulting grammar to translate unseen data well. DeNeefe and Knight (2009) indicate that the way to reconstruct TIG derivations has a direct effect on final translation quality. They suggest that one possible solution is to use derivation forest rather than a single derivation tree for rule extraction. Alternatively, we extend the GHKM algorithm (Galley et al., 2004) to directly extract tree-to-string rules that allow for both substitution and adjoining from aligned and parsed data. There is no need for transforming a parse tree into a TAG derivation explicitly before rule extraction and all derivations can be easily reconstructed using extracted rules. 1 Our rule extraction algorithm involves two steps: (1) extracting minimal rules and (2) composition. 3.1 Extracting Minimal Rules Figure 2 shows a training example, which consists of a Chinese parse tree, an English string, and the word alignment between them. By convention, shaded nodes are called frontier nodes from which tree-tostring rules can be extracted. Note that the source phrase dominated by a frontier node and its corresponding target phrase are consistent with the word alignment: all words in the source phrase are aligned to all words in the corresponding target phrase and vice versa. We distinguish between three categories of tree1Note that our algorithm does not take heads, complements, and adjuncts into consideration and extracts all possible rules with respect to word alignment. Our hope is that this treatment would make our system more robust in the presence of noisy data. It is possible to use the linguistic preferences as features. We leave this for future work. 1281 to-string rules: 1. substitution rules, in which the source tree is an initial tree without adjoining sites. 2. adjoining rules, in which the source tree is an initial tree with at least one adjoining site. 3. auxiliary rules, in which the source tree is an auxiliary tree. For example, in Figure 1, α1 is a substitution rule, α2 is an adjoining rule, and β1 is an auxiliary rule. Minimal substitution rules are the same with those in STSG (Galley et al., 2004; Liu et al., 2006) and therefore can be extracted directly using GHKM. By minimal, we mean that the interior nodes are not frontier and cannot be decomposed. For example, in Table 2, rule 1 (for short r1) is a minimal substitution rule extracted from NR0,1. Minimal adjoining rules are defined as minimal substitution rules, except that each root node must be an adjoining site. In Table 2, r2 is a minimal substitution rule extracted from NP0,1. 
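As an aside on the alignment-consistency condition on frontier nodes described above, the generic sketch below implements the standard test used in GHKM-style extraction (it is not the authors' code, and it omits the full frontier-set computation): a source span and its aligned target closure are consistent if no word inside the closure aligns back outside the span. Spans are half-open index ranges.

```python
def target_closure(src_span, alignment):
    """alignment: a set of (src_index, tgt_index) links; spans are half-open."""
    tgt = [j for (i, j) in alignment if src_span[0] <= i < src_span[1]]
    return (min(tgt), max(tgt) + 1) if tgt else None

def is_consistent(src_span, alignment):
    """True if the source span and its target closure form a consistent pair."""
    tgt_span = target_closure(src_span, alignment)
    if tgt_span is None:
        return False
    return all(src_span[0] <= i < src_span[1]
               for (i, j) in alignment
               if tgt_span[0] <= j < tgt_span[1])

# toy example: a crossing alignment over three words
links = {(0, 0), (1, 2), (2, 1)}
print(is_consistent((1, 3), links))   # True:  source 1-2 covers target 1-2
print(is_consistent((0, 2), links))   # False: target word 1 aligns back to source 2
```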
As NP0,1 is a descendant of NP0,2 with the same label, NP0,1 is a possible adjoining site. Therefore, r6 can be derived from r2 and licensed as a minimal adjoining rule extracted from NP0,2. Similarly, four minimal adjoining rules are extracted from NP0,3 because it has four frontier descendants labeled with NP. Minimal auxiliary rules are derived from minimal substitution and adjoining rules. For example, in Table 2, r7 and r10 are derived from the minimal substitution rule r5 while r8 and r11 are derived from r15. Note that a minimal auxiliary rule can have adjoining sites (e.g., r8). Table 1 lists 17 minimal substitution rules, 7 minimal adjoining rules, and 7 minimal auxiliary rules extracted from Figure 2. 3.2 Composition We can obtain composed rules that capture rich contexts by substituting and adjoining minimal initial and auxiliary rules. For example, the composition of r12, r17, r25, r26, r29, and r31 yields an initial rule with two adjoining sites: ( IP ( NP0:1 ( NR `aob¯amˇa ) ) ( VP2:3 ( VV yˇuyˇı ) ( NP ( NN qiˇanz´e ) ) ) ) →Obama has condemned Note that the source phrase “`aob¯amˇa . . . yˇuyˇı qiˇanz´e” is discontinuous. Our model allows both the source and target phrases of an initial rule with adjoining sites to be discontinuous, which goes beyond the expressive power of synchronous CFG and TSG. Similarly, the composition of two auxiliary rules r8 and r16 yields a new auxiliary rule: ( NP ( NP ( x1:NP∗) ( x2:NP↓) ) ( x3:NP↓) ) →x1x2x3 We first compose initial rules and then compose auxiliary rules, both in a bottom-up way. To maintain a reasonable grammar size, we follow Liu (2006) to restrict that the tree height of a rule is no greater than 3 and the source surface string is no longer than 7. To learn the probability models Pi(α), Ps(α|η), Pa(β|η), and Pa(NONE|η), we collect and normalize counts over these extracted rules following DeNeefe and Knight (2009). 4 Decoding Given a synchronous TAG and a derived source tree π, a tree-to-string decoder finds the English yield of the best derivation of which the Chinese yield matches π: ˆe = e  arg max D s.t. f(D)=π P(D)  (4) This is called tree parsing (Eisner, 2003) as the decoder finds ways of decomposing π into elementary trees. Tree-to-string decoding with STSG is usually treated as forest rescoring (Huang and Chiang, 2007) that involves two steps. The decoder first converts the input tree into a translation forest using a translation rule set by pattern matching. Huang et al. (2006) show that this step is a depth-first search with memorization in O(n) time. Then, the decoder searches for the best derivation in the translation forest intersected with n-gram language models and outputs the target string. 2 Decoding with STAG, however, poses one major challenge to forest rescoring. As translation forest only supports substitution, it is difficult to construct a translation forest for STAG derivations because of 2Mi et al. (2008) give a detailed description of the two-step decoding process. Huang and Mi (2010) systematically analyze the decoding complexity of tree-to-string translation. 
1282 α1 IP0,8 NP2,3 VP3,8 ↓ NR2,3 ↓ α2 NR2,3 nê `aob¯amˇa β1 NP0,3 NP1,2 NP2,3 ∗ NN1,2 ↓ β2 NP0,3 NP0,2 ↓ NP2,3 ∗ β3 NP0,2 NP0,1 NP1,2 ∗ NR0,1 ↓ α3 NN2,3 oÚ zˇongtˇong elementary tree translation rule α1 r1 ( IP ( NP0:1 ( x1:NR↓) ) ( x2:VP↓) ) →x1 x2 α2 r2 ( NR `aob¯amˇa ) →Obama β1 r3 ( NP ( NP0:1 ( x1:NN↓) ) ( x2:NP∗) ) →x1 x2 β2 r4 ( NP ( x1:NP↓) ( x2:NP∗) ) →x1 x2 β3 r5 ( NP ( NP ( x1:NR↓) ) ( x2:NP∗) ) →x1 x2 α3 r6 ( NN zˇongtˇong ) →President Figure 3: Matched trees and corresponding rules. Each node in a matched tree is annotated with a span as superscript to facilitate identification. For example, IP0,8 in α1 indicates that IP0,8 in Figure 2 is matched. Note that its left child NP2,3 is not its direct descendant in Figure 2, suggesting that adjoining is required at this site. α1 α2(1.1) β1(1) β2(1) β3(1) α3(1.1) IP0,8 NP0,2 VP3,8 NR0,1 NN1,2 NR2,3 e1 e2 e3 e4 hyperedge translation rule e1 r1 + r4 ( IP ( NP ( x1:NP↓) ( NP ( x2:NR↓) ) ) ( x3:VP↓) →x1 x2 x3 e2 r1 + r3 + r5 ( IP ( NP ( NP ( x1:NP↓) ( x2:NP↓) ) ( NP ( x3:NR↓) ) ) ( x4:VP↓) ) →x1 x2 x3 x4 e3 r6 ( NN zˇongtˇong ) →President e4 r2 ( NR `aob¯amˇa ) →Obama Figure 4: Converting a derivation forest to a translation forest. In a derivation forest, a node in a derivation forest is a matched elementary tree. A hyperedge corresponds to operations on related trees: substitution (dashed) or adjoining (solid). We use Gorn addresses as tree addresses. α2(1.1) denotes that α2 is substituted in the tree α1 at the node NR2,3 ↓ of address 1.1 (i.e., the first child of the first child of the root node). As translation forest only supports substitution, we combine trees with adjoining sites to form an equivalent tree without adjoining sites. Rules are composed accordingly (e.g., r1 + r4). 1283 adjoining. Therefore, we divide forest rescoring for STAG into three steps: 1. matching, matching STAG rules against the input tree to obtain a TAG derivation forest; 2. conversion, converting the TAG derivation forest into a translation forest; 3. intersection, intersecting the translation forest with an n-gram language model. Given a tree-to-string rule, rule matching is to find a subtree of the input tree that is identical to the source side of the rule. While matching STSG rules against a derived tree is straightforward, it is somewhat non-trivial for STAG rules that move beyond nodes of a local tree. We follow Liu et al. (2006) to enumerate all elementary subtrees and match STAG rules against these subtrees. This can be done by first enumerating all minimal initial and auxiliary trees and then combining them to obtain composed trees, assuming that every node in the input tree is frontier (see Section 3). We impose the same restrictions on the tree height and length as in rule extraction. Figure 3 shows some matched trees and corresponding rules. Each node in a matched tree is annotated with a span as superscript to facilitate identification. For example, IP0,8 in α1 means that IP0,8 in Figure 2 is matched. Note that its left child NP2,3 is not its direct descendant in Figure 2, suggesting that adjoining is required at this site. A TAG derivation tree specifies uniquely how a derived tree is constructed using elementary trees (Joshi, 1985). A node in a derivation tree is an elementary tree and an edge corresponds to operations on related elementary trees: substitution or adjoining. We introduce TAG derivation forest, a compact representation of multiple TAG derivation trees, to encodes all matched TAG derivation trees of the input derived tree. 
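To illustrate the rule-matching step described above in a simplified form (our sketch, not the authors' matcher), the source side of a rule can be matched top-down against the input tree, with substitution sites and foot nodes acting as open leaves that only need to agree in label. Adjoining sites, which allow a matched child to be a non-direct descendant as in Figure 3, are beyond this plain pattern matcher and are omitted here.

```python
def matches(pattern, tree):
    """pattern, tree: (label, children) tuples.  In the pattern, labels ending
    in '*' (foot node) or '!' (our stand-in for the substitution down arrow)
    are open leaves that match any input node carrying the same label."""
    p_label, p_children = pattern
    t_label, t_children = tree
    if p_label.endswith(("*", "!")):
        return p_label[:-1] == t_label
    if p_label != t_label:
        return False
    if not p_children:                    # a lexical leaf must match exactly
        return not t_children
    return (len(p_children) == len(t_children) and
            all(matches(p, t) for p, t in zip(p_children, t_children)))

# the source side of alpha_2, matched against an identical input subtree
rule_src = ("NP", [("NR", [("meiguo", [])])])
subtree  = ("NP", [("NR", [("meiguo", [])])])
print(matches(rule_src, subtree))         # True
```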
Figure 4 shows part of a TAG derivation forest. The six matched elementary trees are nodes in the derivation forest. Dashed and solid lines represent substitution and adjoining, respectively. We use Gorn addresses as tree addresses: 0 is the address of the root node, p is the address of the pth child of the root node, and p · q is the address of the qth child of the node at the address p. The derivation forest should be interpreted as follows: α2 is substituted in the tree α1 at the node NR2,3 ↓ of address 1.1 (i.e., the first child of the first child of the root node) and β1 is adjoined in the tree α1 at the node NP2,3 of address 1. To take advantage of existing decoding techniques, it is necessary to convert a derivation forest to a translation forest. A hyperedge in a translation forest corresponds to a translation rule. Mi et al. (2008) describe how to convert a derived tree to a translation forest using tree-to-string rules only allowing for substitution. Unfortunately, it is not straightforward to convert a derivation forest including adjoining to a translation forest. To alleviate this problem, we combine initial rules with adjoining sites and associated auxiliary rules to form equivalent initial rules without adjoining sites on the fly during decoding. Consider α1 in Figure 3. It has an adjoining site NP2,3. Adjoining β2 in α1 at the node NP2,3 produces an equivalent initial tree with only substitution sites: ( IP0,8 ( NP0,3 ( NP0,2 ↓ ) ( NP2,3 ( NR2,3 ↓ ) ) ) ( VP3,8 ↓ ) ) The corresponding composed rule r1 + r4 has no adjoining sites and can be added to translation forest. We define that the elementary trees needed to be composed (e.g., α1 and β2) form a composition tree in a derivation forest. A node in a composition tree is a matched elementary tree and an edge corresponds to adjoining operations. The root node must be an initial tree with at least one adjoining site. The descendants of the root node must all be auxiliary trees. For example, ( α1 ( β2 ) ) and ( α1 ( β1 ( β3 ) ) ) are two composition trees in Figure 4. The number of children of a node in a composition tree depends on the number of adjoining sites in the node. We use composition forest to encode all possible composition trees. Often, a node in a composition tree may have multiple matched rules. As a large amount of composition trees and composed rules can be identified and constructed on the fly during forest conversion, we used cube pruning (Chiang, 2007; Huang and Chiang, 2007) to achieve a balance between translation quality and decoding efficiency. 1284 category description number VP verb phrase 12.40 NP noun phrase 7.69 IP simple clause 7.26 QP quantifier phrase 0.14 CP clause headed by C 0.10 PP preposition phrase 0.09 CLP classifier phrase 0.02 ADJP adjective phrase 0.02 LCP phrase formed by “XP+LC” 0.02 DNP phrase formed by “XP+DEG” 0.01 Table 2: Top-10 phrase categories of foot nodes and their average occurrences in training corpus. 5 Evaluation We evaluated our adjoining tree-to-string translation system on Chinese-English translation. The bilingual corpus consists of 1.5M sentences with 42.1M Chinese words and 48.3M English words. The Chinese sentences in the bilingual corpus were parsed by an in-house parser. To maintain a reasonable grammar size, we follow Liu et al. (2006) to restrict that the height of a rule tree is no greater than 3 and the surface string’s length is no greater than 7. 
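These size restrictions can be checked in a few lines. The sketch below, over a simple (label, children) tuple encoding of the source tree and using the same '*' and '!' leaf markers as the matching sketch above, is only an illustration of the stated limits (tree height at most 3, source surface string no longer than 7 symbols), not the authors' implementation.

```python
def height(tree):
    label, children = tree
    return 1 if not children else 1 + max(height(c) for c in children)

def surface(tree):
    """Frontier symbols of the source tree: words, substitution sites, foot node."""
    label, children = tree
    if not children:
        return [label]
    symbols = []
    for child in children:
        symbols.extend(surface(child))
    return symbols

def within_size_limits(src_tree, max_height=3, max_length=7):
    return height(src_tree) <= max_height and len(surface(src_tree)) <= max_length

# the source side of beta_1: ( NP ( NP* ) ( NP! ) ), height 2, surface length 2
print(within_size_limits(("NP", [("NP*", []), ("NP!", [])])))   # True
```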
After running GIZA++ (Och and Ney, 2003) to obtain word alignment, our rule extraction algorithm extracted 23.0M initial rules without adjoining sites, 6.6M initial rules with adjoining sites, and 5.3M auxiliary rules. We used the SRILM toolkit (Stolcke, 2002) to train a 4-gram language model on the Xinhua portion of the GIGAWORD corpus, which contains 238M English words. We used the 2002 NIST MT Chinese-English test set as the development set and the 2003-2005 NIST test sets as the test sets. We evaluated translation quality using the BLEU metric, as calculated by mteval-v11b.pl with case-insensitive matching of n-grams. Table 2 shows top-10 phrase categories of foot nodes and their average occurrences in training corpus. We find that VP (verb phrase) is most likely to be the label of a foot node in an auxiliary rule. On average, there are 12.4 nodes labeled with VP are identical to one of its ancestors per tree. NP and IP are also found to be foot node labels frequently. Figure 4 shows the average occurrences of foot node labels VP, NP, and IP over various distances. A distance is the difference of levels between a foot node 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 0 1 2 3 4 5 6 7 8 9 10 11 average occurrence distance VP IP NP Figure 5: Average occurrences of foot node labels VP, NP, and IP over various distances. system grammar MT03 MT04 MT05 Moses 33.10 33.96 32.17 hierarchical SCFG 33.40 34.65 32.88 STSG 33.13 34.55 31.94 tree-to-string STAG 33.64 35.28 32.71 Table 3: BLEU scores on NIST Chinese-English test sets. Scores marked in bold are significantly better that those of STSG at pl.01 level. and the root node. For example, in Figure 2, the distance between NP0,1 and NP0,3 is 2 and the distance between VP6,8 and VP3,8 is 1. As most foot nodes are usually very close to the root nodes, we restrict that a foot node must be the direct descendant of the root node in our experiments. Table 3 shows the BLEU scores on the NIST Chinese-English test sets. Our baseline system is the tree-to-string system using STSG (Liu et al., 2006; Huang et al., 2006). The STAG system outperforms the STSG system significantly on the MT04 and MT05 test sets at pl.01 level. Table 3 also gives the results of Moses (Koehn et al., 2007) and an in-house hierarchical phrase-based system (Chiang, 2007). Our STAG system achieves comparable performance with the hierarchical system. The absolute improvement of +0.7 BLEU over STSG is close to the finding of DeNeefe and Knight (2009) on string-to-tree translation. We feel that one major obstacle for achieving further improvement is that composed rules generated on the fly during decoding (e.g., r1 + r3 + r5 in Figure 4) usually have too many non-terminals, making cube pruning in the in1285 STSG STAG matching 0.086 0.109 conversion 0.000 0.562 intersection 0.946 1.064 other 0.012 0.028 total 1.044 1.763 Table 4: Comparison of average decoding time. tersection phase suffering from severe search errors (only a tiny fraction of the search space can be explored). To produce the 1-best translations on the MT05 test set that contains 1,082 sentences, while the STSG system used 40,169 initial rules without adjoining sites, the STAG system used 28,046 initial rules without adjoining sites, 1,057 initial rules with adjoining sites, and 1,527 auxiliary rules. Table 4 shows the average decoding time on the MT05 test set. While rule matching for STSG needs 0.086 second per sentence, the matching time for STAG only increases to 0.109 second. 
For STAG, the conversion of derivation forests to translation forests takes 0.562 second when we restrict that at most 200 rules can be generated on the fly for each node. As we use cube pruning, although the translation forest of STAG is bigger than that of STSG, the intersection time barely increases. In total, the STAG system runs in 1.763 seconds per sentence, only 1.6 times slower than the baseline system. 6 Conclusion We have presented a new tree-to-string translation system based on synchronous TAG. With translation rules learned from Treebank-style trees, the adjoining tree-to-string system outperforms the baseline system using STSG without significant loss in efficiency. We plan to introduce left-to-right target generation (Huang and Mi, 2010) into the STAG treeto-string system. Our work can also be extended to forest-based rule extraction and decoding (Mi et al., 2008; Mi and Huang, 2008). It is also interesting to introduce STAG into tree-to-tree translation (Zhang et al., 2008; Liu et al., 2009; Chiang, 2010). Acknowledgements The authors were supported by National Natural Science Foundation of China Contracts 60736014, 60873167, and 60903138. We thank the anonymous reviewers for their insightful comments. References Anne Abeille, Yves Schabes, and Aravind Joshi. 1990. Using lexicalized tags for machine translation. In Proc. of COLING 1990. John Chen and K. Vijay-Shanker. 2000. Automated extraction of tags from the penn treebank. In Proc. of IWPT 2000. David Chiang. 2003. Statistical parsing with an automatically extracted tree adjoining grammar. DataOriented Parsing. David Chiang. 2006. An introduction to synchronous grammars. ACL Tutorial. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. David Chiang. 2010. Learning to translate with source and target syntax. In Proc. of ACL 2010. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4). Steve DeNeefe and Kevin Knight. 2009. Synchronous tree adjoining machine translation. In Proc. of EMNLP 2009. Mark Dras. 1999. A meta-level grammar: Redefining synchronous tag for translation and paraphrase. In Proc. of ACL 1999. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In Proc. of ACL 2003. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proc. of NAACL 2004. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. of ACL 2006. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proc. of ACL 2007. Liang Huang and Haitao Mi. 2010. Efficient incremental decoding for tree-to-string translation. In Proc. of EMNLP 2010. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proc. of AMTA 2006. Aravind Joshi, L. Levy, and M. Takahashi. 1975. Tree adjunct grammars. Journal of Computer and System Sciences, 10(1). Aravind Joshi. 1985. How much contextsensitivity is necessary for characterizing structural descriptions)tree adjoining grammars. Natural Language 1286 Processing )Theoretical, Computational, and Psychological Perspectives. 
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL 2007 (poster), pages 77–80, Prague, Czech Republic, June. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-tostring alignment template for statistical machine translation. In Proc. of ACL 2006. Yang Liu, Yajuan L¨u, and Qun Liu. 2009. Improving tree-to-tree translation with packed forests. In Proc. of ACL 2009. Haitao Mi and Liang Huang. 2008. Forest-based translation rule extraction. In Proceedings of EMNLP 2008. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. In Proceedings of ACL/HLT 2008, pages 192–199, Columbus, Ohio, USA, June. Rebecca Nesson, Stuart Shieber, and Alexander Rush. 2006. Induction of probabilistic synchronous treeinsertion grammars for machine translation. In Proc. of AMTA 2006. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Franz Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of ACL 2003. Gilles Prigent. 1994. Synchronous tags and machine translation. In Proc. of TAG+3. Yves Schabes and Richard Waters. 1995. A cubic-time, parsable formalism that lexicalizes context-free grammar without changing the trees produced. Computational Linguistics, 21(4). Stuart M. Shieber and Yves Schabes. 1990. Synchronous tree-adjoining grammars. In Proc. of COLING 1990. Stuart M. Shieber. 2007. Probabilistic synchronous treeadjoining grammars for machine translation: The argument from bilingual dictionaries. In Proc. of SSST 2007. Andreas Stolcke. 2002. Srilm - an extensible language modeling toolkit. In Proceedings of ICSLP 2002, pages 901–904. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–404. Fei Xia. 1999. Extracting tree adjoining grammars from bracketed corpora. In Proc. of the Fifth Natural Language Processing Pacific Rim Symposium. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proc. of ACL 2006. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, and Sheng Li. 2008. A tree sequence alignment-based tree-to-tree translation model. In Proc. of ACL 2008. 1287
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1288–1297, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Enhancing Language Models in Statistical Machine Translation with Backward N-grams and Mutual Information Triggers Deyi Xiong, Min Zhang, Haizhou Li Human Language Technology Institute for Infocomm Research 1 Fusionopolis Way, #21-01 Connexis, Singapore 138632 {dyxiong, mzhang, hli}@i2r.a-star.edu.sg Abstract In this paper, with a belief that a language model that embraces a larger context provides better prediction ability, we present two extensions to standard n-gram language models in statistical machine translation: a backward language model that augments the conventional forward language model, and a mutual information trigger model which captures long-distance dependencies that go beyond the scope of standard n-gram language models. We integrate the two proposed models into phrase-based statistical machine translation and conduct experiments on large-scale training data to investigate their effectiveness. Our experimental results show that both models are able to significantly improve translation quality and collectively achieve up to 1 BLEU point over a competitive baseline. 1 Introduction Language model is one of the most important knowledge sources for statistical machine translation (SMT) (Brown et al., 1993). The standard n-gram language model (Goodman, 2001) assigns probabilities to hypotheses in the target language conditioning on a context history of the preceding n −1 words. Along with the efforts that advance translation models from word-based paradigm to syntax-based philosophy, in recent years we have also witnessed increasing efforts dedicated to extend standard n-gram language models for SMT. We roughly categorize these efforts into two directions: data-volume-oriented and data-depth-oriented. In the first direction, more data is better. In order to benefit from monolingual corpora (LDC news data or news data collected from web pages) that consist of billions or even trillions of English words, huge language models are built in a distributed manner (Zhang et al., 2006; Brants et al., 2007). Such language models yield better translation results but at the cost of huge storage and high computation. The second direction digs deeply into monolingual data to build linguistically-informed language models. For example, Charniak et al. (2003) present a syntax-based language model for machine translation which is trained on syntactic parse trees. Again, Shen et al. (2008) explore a dependency language model to improve translation quality. To some extent, these syntactically-informed language models are consistent with syntax-based translation models in capturing long-distance dependencies. In this paper, we pursue the second direction without resorting to any linguistic resources such as a syntactic parser. With a belief that a language model that embraces a larger context provides better prediction ability, we learn additional information from training data to enhance conventional n-gram language models and extend their ability to capture richer contexts and long-distance dependencies. In particular, we integrate backward n-grams and mutual information (MI) triggers into language models in SMT. In conventional n-gram language models, we look at the preceding n −1 words when calculating the probability of the current word. 
We henceforth call the previous n −1 words plus the current word as forward n-grams and a language model built 1288 on forward n-grams as forward n-gram language model. Similarly, backward n-grams refer to the succeeding n −1 words plus the current word. We train a backward n-gram language model on backward n-grams and integrate the forward and backward language models together into the decoder. In doing so, we attempt to capture both the preceding and succeeding contexts of the current word. Different from the backward n-gram language model, the MI trigger model still looks at previous contexts, which however go beyond the scope of forward n-grams. If the current word is indexed as wi, the farthest word that the forward n-gram includes is wi−n+1. However, the MI triggers are capable of detecting dependencies between wi and words from w1 to wi−n. By these triggers ({wk →wi}, 1 ≤ k ≤i −n), we can capture long-distance dependencies that are outside the scope of forward n-grams. We integrate the proposed backward language model and the MI trigger model into a state-ofthe-art phrase-based SMT system. We evaluate the effectiveness of both models on Chinese-toEnglish translation tasks with large-scale training data. Compared with the baseline which only uses the forward language model, our experimental results show that the additional backward language model is able to gain about 0.5 BLEU points, while the MI trigger model gains about 0.4 BLEU points. When both models are integrated into the decoder, they collectively improve the performance by up to 1 BLEU point. The paper is structured as follows. In Section 2, we will briefly introduce related work and show how our models differ from previous work. Section 3 and 4 will elaborate the backward language model and the MI trigger model respectively in more detail, describe the training procedures and explain how the models are integrated into the phrase-based decoder. Section 5 will empirically evaluate the effectiveness of these two models. Section 6 will conduct an indepth analysis. In the end, we conclude in Section 7. 2 Related Work Previous work devoted to improving language models in SMT mostly focus on two categories as we mentioned before1: large language models (Zhang et al., 2006; Emami et al., 2007; Brants et al., 2007; Talbot and Osborne, 2007) and syntax-based language models (Charniak et al., 2003; Shen et al., 2008; Post and Gildea, 2008). Since our philosophy is fundamentally different from them in that we build contextually-informed language models by using backward n-grams and MI triggers, we discuss previous work that explore these two techniques (backward n-grams and MI triggers) in this section. Since the context “history” in the backward language model (BLM) is actually the future words to be generated, BLM is normally used in a postprocessing where all words have already been generated or in a scenario where sentences are proceeded from the ending to the beginning. Duchateau et al. (2002) use the BLM score as a confidence measure to detect wrongly recognized words in speech recognition. Finch and Sumita (2009) use the BLM in their reverse translation decoder where source sentences are proceeded from the ending to the beginning. Our BLM is different from theirs in that we access the BLM during decoding (rather than after decoding) where source sentences are still proceeded from the beginning to the ending. Rosenfeld et al. (1994) introduce trigger pairs into a maximum entropy based language model as features. 
The trigger pairs are selected according to their mutual information. Zhou (2004) also propose an enhanced language model (MI-Ngram) which consists of a standard forward n-gram language model and an MI trigger model. The latter model measures the mutual information of distancedependent trigger pairs. Our MI trigger model is mostly inspired by the work of these two papers, especially by Zhou’s MI-Ngram model (2004). The difference is that our model is distance-independent and, of course, we are interested in an SMT problem rather than a speech recognition one. Raybaud et al. (2009) use MI triggers in their confidence measures to assess the quality of translation results after decoding. Our method is different from theirs in the MI calculation and trigger pair selection. Mauser et al. (2009) propose bilingual triggers where two source words trigger one target word to 1Language model adaptation is not very related to our work so we ignore it. 1289 improve lexical choice of target words. Our analysis (Section 6) show that our monolingual triggers can also help in the selection of target words. 3 Backward Language Model Given a sequence of words wm 1 = (w1...wm), a standard forward n-gram language model assigns a probability Pf(wm 1 ) to wm 1 as follows. Pf(wm 1 ) = m ∏ i=1 P(wi|wi−1 1 ) ≈ m ∏ i=1 P(wi|wi−1 i−n+1) (1) where the approximation is based on the nth order Markov assumption. In other words, when we predict the current word wi, we only consider the preceding n −1 words wi−n+1...wi−1 instead of the whole context history w1...wi−1. Different from the forward n-gram language model, the backward n-gram language model assigns a probability Pb(wm 1 ) to wm 1 by looking at the succeeding context according to Pb(wm 1 ) = m ∏ i=1 P(wi|wm i+1) ≈ m ∏ i=1 P(wi|wi+n−1 i+1 ) (2) 3.1 Training For the convenience of training, we invert the order in each sentence in the training data, i.e., from the original order (w1...wm) to the reverse order (wm...w1). In this way, we can use the same toolkit that we use to train a forward n-gram language model to train a backward n-gram language model without any other changes. To be consistent with training, we also need to reverse the order of translation hypotheses when we access the trained backward language model2. Note that the Markov context history of Eq. (2) is wi+n−1...wi+1 instead of wi+1...wi+n−1 after we invert the order. The words are the same but the order is completely reversed. 3.2 Decoding In this section, we will present two algorithms to integrate the backward n-gram language model into two kinds of phrase-based decoders respectively: 1) a CKY-style decoder that adopts bracketing transduction grammar (BTG) (Wu, 1997; Xiong 2This is different from the reverse decoding in (Finch and Sumita, 2009) where source sentences are reversed in the order. et al., 2006) and 2) a standard phrase-based decoder (Koehn et al., 2003). Both decoders translate source sentences from the beginning of a sentence to the ending. Wu (1996) introduce a dynamic programming algorithm to integrate a forward bigram language model with inversion transduction grammar. His algorithm is then adapted and extended for integrating forward n-gram language models into synchronous CFGs by Chiang (2007). Our algorithms are different from theirs in two major aspects 1. The string input to the algorithms is in a reverse order. 2. We adopt a different way to calculate language model probabilities for partial hypotheses so that we can utilize incomplete n-grams. 
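As a concrete illustration of the training procedure in Section 3.1, the sketch below inverts the word order of every training sentence so that an unchanged n-gram toolkit can train the backward language model, and inverts translation hypotheses in the same way before they are scored. The file names are placeholders, and this is our own minimal rendering rather than the authors' scripts.

```python
def reverse_corpus(in_path, out_path):
    """Invert the word order of every sentence so that an ordinary n-gram
    toolkit can be used, unchanged, to train the backward language model."""
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            fout.write(" ".join(reversed(line.split())) + "\n")

def reverse_hypothesis(words):
    """Invert a translation hypothesis in the same way before it is scored
    against the trained backward language model."""
    return list(reversed(words))

# reverse_corpus("mono.en", "mono.reversed.en")   # file names are placeholders
```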
Before we introduce the integration algorithms, we define three functions P, L, and R on strings (in a reverse order) over the English terminal alphabet T. The function P is defined as follows. P(wk...w1) = P(wk)...P(wk−n+2|wk...wk−n+3) | {z } a × ∏ 1≤i≤k−n+1 P(wi|wi+n−1...wi+1) | {z } b (3) This function consists of two parts: • The first part (a) calculates incomplete n-gram language model probabilities for word wk to wk−n+2. That means, we calculate the unigram probability for wk (P(wk)), bigram probability for wk−1 (P(wk−1|wk)) and so on until we take n −1-gram probability for wk−n+2 (P(wk−n+2|wk...wk−n+3)). This resembles the way in which the forward language model probability in the future cost is computed in the standard phrase-based SMT (Koehn et al., 2003). • The second part (b) calculates complete ngram backward language model probabilities for word wk−n+1 to w1. The function is different from Chiang’s p function in that his function p only calculates language model probabilities for the complete n-grams. Since 1290 we calculate backward language model probabilities during a beginning-to-ending (left-to-right) decoding process, the succeeding context for the current word is either yet to be generated or incomplete in terms of n-grams. The P function enables us to utilize incomplete succeeding contexts to approximately predict words. Once the succeeding contexts are complete, we can quickly update language model probabilities in an efficient way in our algorithms. The other two functions L and R are defined as follows L(wk...w1) = { wk...wk−n+2, if k ≥n wk...w1, otherwise (4) R(wk...w1) = { wn−1...w1, if k ≥n wk...w1, otherwise (5) The L and R function return the leftmost and rightmost n −1 words from a string in a reverse order respectively. Following Chiang (2007), we describe our algorithms in a deductive system. We firstly show the algorithm3 that integrates the backward language model into a BTG-style decoder (Xiong et al., 2006) in Figure 1. The item [A, i, j; l|r] indicates that a BTG node A has been constructed spanning from i to j on the source side with the leftmost|rightmost n −1 words l|r on the target side. As mentioned before, all target strings assessed by the defined functions (P, L, and R) are in an inverted order (denoted by e). We only display the backward language model probability for each item, ignoring all other scores such as phrase translation probabilities. The Eq. (8) in Figure 1 shows how we calculate the backward language model probability for the axiom which applies a BTG lexicon rule to translate a source phrase c into a target phrase e. The Eq. (9) and (10) show how we update the backward language model probabilities for two inference rules which combine two neighboring blocks in a straight and inverted order respectively. The fundamental theories behind this update are P(e1e2) = P(e1)P(e2) P(R(e2)L(e1)) P(R(e2))P(L(e1)) (6) 3It can also be easily adapted to integrate the forward ngram language model. Function Value e1 a1a2a3 e2 b1b2b3 R(e2) b2b1 L(e1) a3a2 P(R(e2)) P(b2)P(b1|b2) P(L(e1)) P(a3)P(a2|a3) P(e1) P(a3)P(a2|a3)P(a1|a3a2) P(e2) P(b3)P(b2|b3)P(b1|b3b2) P(R(e2)L(e1)) P(b2)P(b1|b2) P(a3|b2b1)P(a2|b1a3) P(e1e2) P(b3)P(b2|b3)P(b1|b3b2) P(a3|b2b1)P(a2|b1a3)P(a1|a3a2) Table 1: Values of P, L, and R in a 3-gram example . P(e2e1) = P(e1)P(e2) P(R(e1)L(e2)) P(R(e1))P(L(e2)) (7) Whenever two strings e1 and e2 are concatenated in a straight or inverted order, we can reuse their P values (P(e1) and P(e2)) in terms of dynamic programming. 
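In case a concrete rendering helps, the following sketch implements the three functions and the update of equation (6) in log space. It is our own illustration: strings are Python lists already given in reverse order, and lm(word, context) stands for whatever routine returns a probability from the trained backward n-gram model.

```python
import math

def log_P(s, lm, n):
    """Equation (3): score a string s given in reverse order.  The first n-1
    words receive incomplete n-gram probabilities (part a), the remaining
    words full n-gram probabilities (part b)."""
    total = 0.0
    for i, w in enumerate(s):
        context = tuple(s[max(0, i - n + 1):i])
        total += math.log(lm(w, context))
    return total

def L(s, n):
    return s[:n - 1] if len(s) >= n else s        # equation (4)

def R(s, n):
    return s[-(n - 1):] if len(s) >= n else s     # equation (5)

def log_P_concat(rev_e1, logP1, rev_e2, logP2, lm, n):
    """Equation (6): backward LM score after a straight combination e1 e2.
    The cached scores of the two parts are reused; only the boundary words,
    whose n-grams become complete, are rescored.
    For the inverted combination of equation (7), swap the roles of e1 and e2."""
    boundary = R(rev_e2, n) + L(rev_e1, n)
    return (logP1 + logP2 + log_P(boundary, lm, n)
            - log_P(R(rev_e2, n), lm, n) - log_P(L(rev_e1, n), lm, n))
```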
Only the probabilities of boundary words (e.g., R(e2)L(e1) in Eq. (6)) need to be recalculated since they have complete n-grams after the concatenation. Table 1 shows values of P, L, and R in a 3-gram example which helps to verify Eq. (6). These two equations guarantee that our algorithm can correctly compute the backward language model probability of a sentence stepwise in a dynamic programming framework.4 The theoretical time complexity of this algorithm is O(m3|T|4(n−1)) because in the update parts in Eq. (6) and (7) both the numerator and denominator have up to 2(n−1) terminal symbols. This is the same as the time complexity of Chiang’s language model integration (Chiang, 2007). Figure 2 shows the algorithm that integrates the backward language model into a standard phrasebased SMT (Koehn et al., 2003). V denotes a coverage vector which records source words translated so far. The Eq. (11) shows how we update the backward language model probability for a partial hypothesis when it is extended into a longer hypothesis by a target phrase translating an uncovered source 4The start-of-sentence symbol ⟨s⟩and end-of-sentence symbol ⟨/s⟩can be easily added to update the final language model probability when a translation hypothesis covering the whole source sentence is completed. 1291 A →c/e [A, i, j; L(e)|R(e)] : P(e) (8) A →[A1, A2] [A1, i, k; L(e1)|R(e1)] : P(e1) [A2, k + 1, j; L(e2)|R(e2)] : P(e2) [A, i, j; L(e1e2)|R(e1e2)] : P(e1)P(e2) P(R(e2)L(e1)) P(R(e2))P(L(e1)) (9) A →⟨A1, A2⟩[A1, i, k; L(e1)|R(e1)] : P(e1) [A2, k + 1, j; L(e2)|R(e2)] : P(e2) [A, i, j; L(e2e1)|R(e2e1)] : P(e1)P(e2) P(R(e1)L(e2)) P(R(e1))P(L(e2)) (10) Figure 1: Integrating the backward language model into a BTG-style decoder. [V; L(e1)] : P(e1) c/e2 : P(e2) [V′; L(e1e2)] : P(e1)P(e2) P(R(e2)L(e1)) P(R(e2))P(L(e1)) (11) Figure 2: Integrating the backward language model into a standard phrase-based decoder. segment. This extension on the target side is similar to the monotone combination of Eq. (9) in that a newly translated phrase is concatenated to an early translated sequence. 4 MI Trigger Model It is well-known that long-distance dependencies between words are very important for statistical language modeling. However, n-gram language models can only capture short-distance dependencies within an n-word window. In order to model long-distance dependencies, previous work such as (Rosenfeld et al., 1994) and (Zhou, 2004) exploit trigger pairs. A trigger pair is defined as an ordered 2-tuple (x, y) where word x occurs in the preceding context of word y. It can also be denoted in a more visual manner as x →y with x being the trigger and y the triggered word5. We use pointwise mutual information (PMI) (Church and Hanks, 1990) to measure the strength of the association between x and y, which is defined as follows PMI(x, y) = log( P(x, y) P(x)P(y)) (12) 5In this paper, we require that word x and y occur in the same sentence. Zhou (2004) proposes a new language model enhanced with MI trigger pairs. In his model, the probability of a given sentence wm 1 is approximated as P(wm 1 ) ≈( m ∏ i=1 P(wi|wi−1 i−n+1)) × m ∏ i=n+1 i−n ∏ k=1 exp(PMI(wk, wi, i −k −1)) (13) There are two components in his model. The first component is still the standard n-gram language model. The second one is the MI trigger model which multiples all exponential PMI values for trigger pairs where the current word is the triggered word and all preceding words outside the n-gram window of the current word are triggers. 
Note that his MI trigger model is distance-dependent since trigger pairs (wk, wi) are sensitive to their distance i −k −1 (zero distance for adjacent words). Therefore the distance between word x and word y should be taken into account when calculating their PMI. In this paper, for simplicity, we adopt a distanceindependent MI trigger model as follows MI(wm 1 ) = m ∏ i=n+1 i−n ∏ k=1 exp(PMI(wk, wi)) (14) We integrate the MI trigger model into the loglinear model of machine translation as an additional knowledge source which complements the standard n-gram language model in capturing long-distance dependencies. By MERT (Och, 2003), we are even able to tune the weight of the MI trigger model against the weight of the standard n-gram language model while Zhou (2004) sets equal weights for both models. 1292 4.1 Training We can use the maximum likelihood estimation method to calculate PMI for each trigger pair by taking counts from training data. Let C(x, y) be the co-occurrence count of the trigger pair (x, y) in the training data. The joint probability of (x, y) is calculated as P(x, y) = C(x, y) ∑ x,y C(x, y) (15) The marginal probabilities of x and y can be deduced from the joint probability as follows P(x) = ∑ y P(x, y) (16) P(y) = ∑ x P(x, y) (17) Since the number of distinct trigger pairs is O(|T|2), the question is how to select valuable trigger pairs. We select trigger pairs according to the following three steps 1. The distance between x and y must not be less than n −1. Suppose we use a 5-gram language model and y = wi , then x ∈{w1...wi−5}. 2. C(x, y) > c. In all our experiments we set c = 10. 3. Finally, we only keep trigger pairs whose PMI value is larger than 0. Trigger pairs whose PMI value is less than 0 often contain stop words, such as “the”, “a”. These stop words have very large marginal probabilities due to their high frequencies. 4.2 Decoding The MI trigger model of Eq. (14) can be directly integrated into the decoder. For the standard phrasebased decoder (Koehn et al., 2003), whenever a partial hypothesis is extended by a new target phrase, we can quickly retrieve the pre-computed PMI value for each trigger pair where the triggered word locates in the newly translated target phrase and the trigger is outside the n-word window of the triggered word. It’s a little more complicated to integrate the MI trigger model into the CKY-style phrase-based decoder. But we still can handle it by dynamic programming as follows MI(e1e2) = MI(e1)MI(e2)MI(e1 →e2) (18) where MI(e1 →e2) represents the PMI values in which a word in e1 triggers a word in e2. It is defined as follows MI(e1 →e2) = ∏ wi∈e2 ∏ wk∈e1 i−k≥n exp(PMI(wk, wi)) (19) 5 Experiments In this section, we conduct large-scale experiments on NIST Chinese-to-English translation tasks to evaluate the effectiveness of the proposed backward language model and MI trigger model in SMT. Our experiments focus on the following two issues: 1. How much improvements can we achieve by separately integrating the backward language model and the MI trigger model into our phrase-based SMT system? 2. Can we obtain a further improvement if we jointly apply both models? 5.1 System Overview Without loss of generality6, we evaluate our models in a phrase-based SMT system which adapts bracketing transduction grammars to phrasal translation (Xiong et al., 2006). 
The log-linear model of this system can be formulated as w(D) =MT (rl 1..nl) · MR(rm 1..nm)λR · PfL(e)λfL · exp(|e|)λw (20) where D denotes a derivation, rl 1..nl are the BTG lexicon rules which translate source phrases to target phrases, and rm 1..nm are the merging rules which combine two neighboring blocks into a larger block in a straight or inverted order. The translation model MT consists of widely used phrase and lexical translation probabilities (Koehn et al., 2003). 6We have discussed how to integrate the backward language model and the MI trigger model into the standard phrase-based SMT system (Koehn et al., 2003) in Section 3.2 and 4.2 respectively. 1293 The reordering model MR predicts the merging order (straight or inverted) by using discriminative contextual features (Xiong et al., 2006). PfL is the standard forward n-gram language model. If we simultaneously integrate both the backward language model PbL and the MI trigger model MI into the system, the new log-linear model will be formulated as w(D) =MT (rl 1..nl) · MR(rm 1..nm)λR · PfL(e)λfL · PbL(e)λbL · MI(e)λMI · exp(|e|)λw (21) 5.2 Experimental Setup Our training corpora7 consist of 96.9M Chinese words and 109.5M English words in 3.8M sentence pairs. We used all corpora to train our translation model and smaller corpora without the United Nations corpus to build a maximum entropy based reordering model (Xiong et al., 2006). To train our language models and MI trigger model, we used the Xinhua section of the English Gigaword corpus (306 million words). Firstly, we built a forward 5-gram language model using the SRILM toolkit (Stolcke, 2002) with modified Kneser-Ney smoothing. Then we trained a backward 5-gram language model on the same monolingual corpus in the way described in Section 3.1. Finally, we trained our MI trigger model still on this corpus according to the method in Section 4.1. The trained MI trigger model consists of 2.88M trigger pairs. We used the NIST MT03 evaluation test data as the development set, and the NIST MT04, MT05 as the test sets. We adopted the case-insensitive BLEU4 (Papineni et al., 2002) as the evaluation metric, which uses the shortest reference sentence length for the brevity penalty. Statistical significance in BLEU differences is tested by paired bootstrap re-sampling (Koehn, 2004). 5.3 Experimental Results The experimental results on the two NIST test sets are shown in Table 2. When we combine the backward language model with the forward language 7LDC2004E12, LDC2004T08, LDC2005T10, LDC2003E14, LDC2002E18, LDC2005T06, LDC2003E07 and LDC2004T07. Model MT-04 MT-05 Forward (Baseline) 35.67 34.41 Forward+Backward 36.16+ 34.97+ Forward+MI 36.00+ 34.85+ Forward+Backward+MI 36.76+ 35.12+ Table 2: BLEU-4 scores (%) on the two test sets for different language models and their combinations. +: better than the baseline (p < 0.01). model, we obtain 0.49 and 0.56 BLEU points over the baseline on the MT-04 and MT-05 test set respectively. Both improvements are statistically significant (p < 0.01). The MI trigger model also achieves statistically significant improvements of 0.33 and 0.44 BLEU points over the baseline on the MT-04 and MT-05 respectively. When we integrate both the backward language model and the MI trigger model into our system, we obtain improvements of 1.09 and 0.71 BLEU points over the single forward language model on the MT-04 and MT-05 respectively. These improvements are larger than those achieved by using only one model (the backward language model or the MI trigger model). 
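Before turning to the analysis, and only as a sketch under our own naming, the MI trigger score reported for a complete hypothesis (the logarithm of equation (14), which the next section quotes for two example outputs) can be computed from a pre-built PMI table such as the one sketched earlier:

```python
def mi_trigger_logscore(hypothesis, triggers, n=5):
    """Logarithm of the MI trigger model score of equation (14): sum the
    pre-computed PMI of every pair (w_k, w_i) whose trigger w_k lies outside
    the n-gram window of the triggered word w_i.  `triggers` is a dictionary
    mapping word pairs to PMI; pairs not in it contribute 0, matching the
    paper's treatment of pruned (non-positive PMI) pairs."""
    score = 0.0
    for i, w_i in enumerate(hypothesis):
        for k in range(0, i - n + 1):
            score += triggers.get((hypothesis[k], w_i), 0.0)
    return score
```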
6 Analysis In this section, we will study more details of the two models by looking at the differences that they make on translation hypotheses. These differences will help us gain some insights into how the presented models improve translation quality. Table 3 shows an example from our test set. The italic words in the hypothesis generated by using the backward language model (F+B) exactly match the reference. However, the italic words in the baseline hypothesis fail to match the reference due to the incorrect position of the word “decree” (法令). We calculate the forward/backward language model score (the logarithm of language model probability) for the italic words in both the baseline and F+B hypothesis according to the trained language models. The difference in the forward language model score is only 1.58, which may be offset by differences in other features in the log-linear translation model. On the other hand, the difference in the backward language model score is 3.52. This larger difference may guarantee that the hypothesis generated by F+B 1294 Source 北京青年报报导, 北京农业局最 近发出一连串的防治及监督法 令 Baseline Beijing Youth Daily reported that Beijing Agricultural decree recently issued a series of control and supervision F+B Beijing Youth Daily reported that Beijing Bureau of Agriculture recently issued a series of prevention and control laws Reference Beijing Youth Daily reported that Beijing Bureau of Agriculture recently issued a series of preventative and monitoring ordinances Table 3: Translation example from the MT-04 test set, comparing the baseline with the backward language model. F+B: forward+backward language model . is better enough to be selected as the best hypothesis by the decoder. This suggests that the backward language model is able to provide useful and discriminative information which is complementary to that given by the forward language model. In Table 4, we present another example to show how the MI trigger model improves translation quality. The major difference in hypotheses of this example is the word choice between “is” and “was”. The new system enhanced with the MI trigger model (F+M) selects the former while the baseline selects the latter. The forward language model score for the baseline hypothesis is -26.41, which is higher than the score of the F+M hypothesis -26.67. This could be the reason why the baseline selects the word “was” instead of “is”. As can be seen, there is another “is” in the preceding context of the word “was” in the baseline hypothesis. Unfortunately, this word “is” is located just outside the scope of the preceding 5-gram context of “was”. The forward 5-gram language model is hence not able to take it into account when calculating the probability of “was”. However, this is not a problem for the MI trigger model. Since “is” and “was” rarely co-occur in the same sentence, the PMI value of the trigger pair (is, was)8 is -1.03 8Since we remove all trigger pairs whose PMI value is negative, the PMI value of this pair (is, was) is set 0 in practice in the decoder. Source 自卫队此行之所以引人瞩目, 是 因为它并非是一个孤立的事件 。 Baseline Self-Defense Force ’s trip is remarkable , because it was not an isolated incident . F+M Self-Defense Force ’s trip is remarkable , because it is not an isolated incident . Reference The Self-Defense Forces’ trip arouses attention because it is not an isolated incident. Table 4: Translation example from the MT-04 test set, comparing the baseline with the MI trigger model. 
Both system outputs are not detokenized so that we can see how language model scores are calculated. The underlined words highlight the difference between the enhanced models and the baseline. F+M: forward language model + MI trigger model. while the PMI value of the trigger pair (is, is) is as high as 0.32. Therefore our MI trigger model selects “is” rather than “was”.9 This example illustrates that the MI trigger model is capable of selecting correct words by using long-distance trigger pairs. 7 Conclusion We have presented two models to enhance the ability of standard n-gram language models in capturing richer contexts and long-distance dependencies that go beyond the scope of forward n-gram windows. The two models have been integrated into the decoder and have shown to improve a state-ofthe-art phrase-based SMT system. The first model is the backward language model which uses backward n-grams to predict the current word. We introduced algorithms that directly integrate the backward language model into a CKY-style and a standard phrase-based decoder respectively. The second model is the MI trigger model that incorporates long-distance trigger pairs into language modeling. Overall improvements are up to 1 BLEU point on the NIST Chinese-to-English translation tasks with large-scale training data. Further study of the two 9The overall MI trigger model scores (the logarithm of Eq. (14)) of the baseline hypothesis and the F+M hypothesis are 2.09 and 2.25 respectively. 1295 models indicates that backward n-grams and longdistance triggers provide useful information to improve translation quality. In future work, we would like to integrate the backward language model into a syntax-based system in a way that is similar to the proposed algorithm shown in Figure 1. We are also interested in exploring more morphologically- or syntacticallyinformed triggers. For example, a verb in the past tense triggers another verb also in the past tense rather than the present tense. References Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 858– 867, Prague, Czech Republic, June. Association for Computational Linguistics. P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Eugene Charniak, Kevin Knight, and Kenji Yamada. 2003. Syntax-based language models for statistical machine translation. In Proceedings of MT Summit IX. Intl. Assoc. for Machine Translation. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22–29. Jacques Duchateau, Kris Demuynck, and Patrick Wambacq. 2002. Confidence scoring based on backward language models. In Proceedings of ICASSP, pages 221–224, Orlando, FL, April. Ahmad Emami, Kishore Papineni, and Jeffrey Sorensen. 2007. Large-scale distributed language modeling. In Proceedings of ICASSP, pages 37–40, Honolulu, HI, April. Andrew Finch and Eiichiro Sumita. 2009. Bidirectional phrase-based statistical machine translation. 
In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1124– 1132, Singapore, August. Association for Computational Linguistics. Joshua T. Goodman. 2001. A bit of progress in language modeling extended version. Technical report, Microsoft Research. Philipp Koehn, Franz Joseph Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 58–54, Edmonton, Canada, May-June. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP 2004, pages 388–395, Barcelona, Spain, July. Arne Mauser, Saˇsa Hasan, and Hermann Ney. 2009. Extending statistical machine translation with discriminative and trigger-based lexicon models. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 210–218, Singapore, August. Association for Computational Linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo, Japan, July. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, Philadelphia, Pennsylvania, USA, July. Matt Post and Daniel Gildea. 2008. Parsers as language models for statistical machine translation. In Proceedings of AMTA. Sylvain Raybaud, Caroline Lavecchia, David Langlois, and Kamel Sma¨ıli. 2009. New confidence measures for statistical machine translation. In Proceedings of the International Conference on Agents and Artificial Intelligence, pages 61–68, Porto, Portugal, January. Roni Rosenfeld, Jaime Carbonell, and Alexander Rudnicky. 1994. Adaptive statistical language modeling: A maximum entropy approach. Technical report, Carnegie Mellon University. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL-08: HLT, pages 577–585, Columbus, Ohio, June. Association for Computational Linguistics. Andreas Stolcke. 2002. Srilm–an extensible language modeling toolkit. In Proceedings of the 7th International Conference on Spoken Language Processing, pages 901–904, Denver, Colorado, USA, September. David Talbot and Miles Osborne. 2007. Randomised language modelling for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 512–519, 1296 Prague, Czech Republic, June. Association for Computational Linguistics. Dekai Wu. 1996. A polynomial-time algorithm for statistical machine translation. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 152–158, Santa Cruz, California, USA, June. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum entropy based phrase reordering model for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 521–528, Sydney, Australia, July. 
Association for Computational Linguistics. Ying Zhang, Almut Silja Hildebrand, and Stephan Vogel. 2006. Distributed language modeling for n-best list re-ranking. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 216–223, Sydney, Australia, July. Association for Computational Linguistics. GuoDong Zhou. 2004. Modeling of long distance context dependency. In Proceedings of Coling, pages 92–98, Geneva, Switzerland, Aug 23–Aug 27. COLING.
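For readers who want to see the trigger mechanism analysed in Section 6 in miniature, the sketch below estimates sentence-level PMI from a toy corpus, clips negative values to zero as the model does, and scores a hypothesis by summing PMI over long-distance word pairs only. The corpus, the hypothesis, and the additive form of the score are illustrative assumptions; the model's actual scoring function (Eq. (14)) and its training counts are not reproduced here.

```python
import math
from collections import Counter
from itertools import combinations

def pmi_table(sentences):
    """Sentence-level co-occurrence PMI; negative values are clipped to 0, as described above."""
    n = len(sentences)
    word_count, pair_count = Counter(), Counter()
    for sent in sentences:
        words = set(sent.split())
        word_count.update(words)
        pair_count.update(combinations(sorted(words), 2))
    pmi = {}
    for (x, y), c in pair_count.items():
        val = math.log((c / n) / ((word_count[x] / n) * (word_count[y] / n)))
        pmi[(x, y)] = pmi[(y, x)] = max(val, 0.0)   # drop negative-PMI pairs
    return pmi

def trigger_score(hypothesis, pmi, window=5):
    """Sum clipped PMI over (trigger, current word) pairs farther apart than the n-gram window."""
    words = hypothesis.split()
    return sum(pmi.get((words[i], words[j]), 0.0)
               for j in range(len(words))
               for i in range(0, j - window))

# Hypothetical toy corpus and hypothesis, not the paper's data.
corpus = ["it is clear because it is not an isolated case",
          "the plan is simple because it is not new",
          "the meeting was short and it was cancelled later"]
pmi = pmi_table(corpus)
print(round(pmi.get(("because", "is"), 0.0), 2))  # co-occurring pair -> positive PMI (0.41)
print(pmi.get(("is", "was"), 0.0))                # "is" and "was" never co-occur here -> 0.0
print(round(trigger_score("the trip is remarkable they said because "
                          "in the end it is not an isolated incident", pmi), 2))
```

Only pairs whose distance exceeds the n-gram window contribute, which mirrors how the trigger model supplies information that the forward 5-gram model cannot see.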
2011
129
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 123–131, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Automatically Extracting Polarity-Bearing Topics for Cross-Domain Sentiment Classification Yulan He Chenghua Lin† Harith Alani Knowledge Media Institute, The Open University Milton Keynes MK7 6AA, UK {y.he,h.alani}@open.ac.uk † School of Engineering, Computing and Mathematics University of Exeter, Exeter EX4 4QF, UK [email protected] Abstract Joint sentiment-topic (JST) model was previously proposed to detect sentiment and topic simultaneously from text. The only supervision required by JST model learning is domain-independent polarity word priors. In this paper, we modify the JST model by incorporating word polarity priors through modifying the topic-word Dirichlet priors. We study the polarity-bearing topics extracted by JST and show that by augmenting the original feature space with polarity-bearing topics, the in-domain supervised classifiers learned from augmented feature representation achieve the state-of-the-art performance of 95% on the movie review data and an average of 90% on the multi-domain sentiment dataset. Furthermore, using feature augmentation and selection according to the information gain criteria for cross-domain sentiment classification, our proposed approach performs either better or comparably compared to previous approaches. Nevertheless, our approach is much simpler and does not require difficult parameter tuning. 1 Introduction Given a piece of text, sentiment classification aims to determine whether the semantic orientation of the text is positive, negative or neutral. Machine learning approaches to this problem (?; ?; ?; ?; ?; ?) typically assume that classification models are trained and tested using data drawn from some fixed distribution. However, in many practical cases, we may have plentiful labeled examples in the source domain, but very few or no labeled examples in the target domain with a different distribution. For example, we may have many labeled books reviews, but we are interested in detecting the polarity of electronics reviews. Reviews for different produces might have widely different vocabularies, thus classifiers trained on one domain often fail to produce satisfactory results when shifting to another domain. This has motivated much research on sentiment transfer learning which transfers knowledge from a source task or domain to a different but related task or domain (?; ?; ?; ?). Joint sentiment-topic (JST) model (?; ?) was extended from the latent Dirichlet allocation (LDA) model (?) to detect sentiment and topic simultaneously from text. The only supervision required by JST learning is domain-independent polarity word prior information. With prior polarity words extracted from both the MPQA subjectivity lexicon1 and the appraisal lexicon2, the JST model achieves a sentiment classification accuracy of 74% on the movie review data3 and 71% on the multi-domain sentiment dataset4. Moreover, it is also able to extract coherent and informative topics grouped under different sentiment. The fact that the JST model does not required any labeled documents for training makes it desirable for domain adaptation in sentiment classification. 
Many existing approaches solve the sentiment transfer problem by associating words 1http://www.cs.pitt.edu/mpqa/ 2http://lingcog.iit.edu/arc/appraisal_ lexicon_2007b.tar.gz 3http://www.cs.cornell.edu/people/pabo/ movie-review-data 4http://www.cs.jhu.edu/˜mdredze/ datasets/sentiment/index2.html 123 from different domains which indicate the same sentiment (?; ?). Such an association mapping problem can be naturally solved by the posterior inference in the JST model. Indeed, the polarity-bearing topics extracted by JST essentially capture sentiment associations among words from different domains which effectively overcome the data distribution difference between source and target domains. The previously proposed JST model uses the sentiment prior information in the Gibbs sampling inference step that a sentiment label will only be sampled if the current word token has no prior sentiment as defined in a sentiment lexicon. This in fact implies a different generative process where many of the word prior sentiment labels are observed. The model is no longer “latent”. We propose an alternative approach by incorporating word prior polarity information through modifying the topic-word Dirichlet priors. This essentially creates an informed prior distribution for the sentiment labels and would allow the model to actually be latent and would be consistent with the generative story. We study the polarity-bearing topics extracted by the JST model and show that by augmenting the original feature space with polarity-bearing topics, the performance of in-domain supervised classifiers learned from augmented feature representation improves substantially, reaching the state-of-the-art results of 95% on the movie review data and an average of 90% on the multi-domain sentiment dataset. Furthermore, using simple feature augmentation, our proposed approach outperforms the structural correspondence learning (SCL) (?) algorithm and achieves comparable results to the recently proposed spectral feature alignment (SFA) method (?). Nevertheless, our approach is much simpler and does not require difficult parameter tuning. We proceed with a review of related work on sentiment domain adaptation. We then briefly describe the JST model and present another approach to incorporate word prior polarity information into JST learning. We subsequently show that words from different domains can indeed be grouped under the same polarity-bearing topic through an illustration of example topic words extracted by JST before proposing a domain adaptation approach based on JST. We verify our proposed approach by conducting experiments on both the movie review data and the multi-domain sentiment dataset. Finally, we conclude our work and outline future directions. 2 Related Work There has been significant amount of work on algorithms for domain adaptation in NLP. Earlier work treats the source domain data as “prior knowledge” and uses maximum a posterior (MAP) estimation to learn a model for the target domain data under this prior distribution (?). Chelba and Acero (?) also uses the source domain data to estimate prior distribution but in the context of a maximum entropy (ME) model. The ME model has later been studied in (?) for domain adaptation where a mixture model is defined to learn differences between domains. Other approaches rely on unlabeled data in the target domain to overcome feature distribution differences between domains. Motivated by the alternating structural optimization (ASO) algorithm (?) 
for multi-task learning, Blitzer et al. (?) proposed structural correspondence learning (SCL) for domain adaptation in sentiment classification. Given labeled data from a source domain and unlabeled data from target domain, SCL selects a set of pivot features to link the source and target domains where pivots are selected based on their common frequency in both domains and also their mutual information with the source labels. There has also been research in exploring careful structuring of features for domain adaptation. Daum´e (?) proposed a kernel-mapping function which maps both source and target domains data to a high-dimensional feature space so that data points from the same domain are twice as similar as those from different domains. Dai et al.(?) proposed translated learning which uses a language model to link the class labels to the features in the source spaces, which in turn is translated to the features in the target spaces. Dai et al. (?) further proposed using spectral learning theory to learn an eigen feature representation from a task graph representing features, instances and class labels. In a similar vein, Pan et al. (?) proposed the spectral feature alignment (SFA) algorithm where some domainindependent words are used as a bridge to construct a bipartite graph to model the co-occurrence relationship between domain-specific words and domain-independent words. Feature clusters are 124 generated by co-align domain-specific and domainindependent words. Graph-based approach has also been studied in (?) where a graph is built with nodes denoting documents and edges denoting content similarity between documents. The sentiment score of each unlabeled documents is recursively calculated until convergence from its neighbors the actual labels of source domain documents and pseudo-labels of target document documents. This approach was later extended by simultaneously considering relations between documents and words from both source and target domains (?). More recently, Seah et al. (?) addressed the issue when the predictive distribution of class label given input data of the domains differs and proposed Predictive Distribution Matching SVM learn a robust classifier in the target domain by leveraging the labeled data from only the relevant regions of multiple sources. 3 Joint Sentiment-Topic (JST) Model Assume that we have a corpus with a collection of D documents denoted by C = {d1, d2, ..., dD}; each document in the corpus is a sequence of Nd words denoted by d = (w1, w2, ..., wNd), and each word in the document is an item from a vocabulary index with V distinct terms denoted by {1, 2, ..., V }. Also, let S be the number of distinct sentiment labels, and T be the total number of topics. The generative process in JST which corresponds to the graphical model shown in Figure ??(a) is as follows: • For each document d, choose a distribution πd ∼Dir(γ). • For each sentiment label l under document d, choose a distribution θd,l ∼Dir(α). • For each word wi in document d – choose a sentiment label li ∼Mult(πd), – choose a topic zi ∼Mult(θd,li), – choose a word wi from ϕlizi, a Multinomial distribution over words conditioned on topic zi and sentiment label li. Gibbs sampling was used to estimate the posterior distribution by sequentially sampling each variable of interest, zt and lt here, from the distribution over w ! z " Nd S*T # $ D l S (a) JST model. w ! z " Nd S*T # $ D l S S ! S (b) Modified JST model. Figure 1: JST model and its modified version. 
that variable given the current values of all other variables and data. Letting the superscript −t denote a quantity that excludes data from tth position, the conditional posterior for zt and lt by marginalizing out the random variables ϕ, θ, and π is P(zt = j, lt = k|w, z−t, l−t, α, β, γ) ∝ N−t wt,j,k + β N−t j,k + V β · N−t j,k,d + αj,k N−t k,d + P j αj,k · N−t k,d + γ N−t d + Sγ . (1) where Nwt,j,k is the number of times word wt appeared in topic j and with sentiment label k, Nj,k is the number of times words assigned to topic j and sentiment label k, Nj,k,d is the number of times a word from document d has been associated with topic j and sentiment label k, Nk,d is the number of times sentiment label k has been assigned to some word tokens in document d, and Nd is the total number of words in the document collection. In the modified JST model as shown in Figure ??(b), we add an additional dependency link of ϕ on the matrix λ of size S ×V which we use to encode word prior sentiment information into the JST model. For each word w ∈{1, ..., V }, if w is found in the sentiment lexicon, for each l ∈{1, ..., S}, the element λlw is updated as follows λlw =  1 if S(w) = l 0 otherwise , (2) where the function S(w) returns the prior sentiment label of w in a sentiment lexicon, i.e. neutral, posi125 Book DVD Book Elec. Book Kitch. DVD Elec. DVD Kitch. Elec. Kitch. Pos. recommend funni interest pictur interest qualiti concert sound movi recommend sound pleas highli cool topic clear success easili rock listen stori highli excel look easi entertain knowledg paper polit servic favorit bass classic perfect satisfi worth depth awesom follow color clearli stainless sing amaz fun great perform materi strong worth easi accur popular safe talent acoust charact qulati comfort profession Neg. mysteri cop abus problem bore return bore poorli horror cabinet tomtom elimin fbi shock question poor tediou heavi plot low alien break region regardless investig prison mislead design cheat stick stupid replac scari install error cheapli death escap point case crazi defect stori avoid evil drop code plain report dirti disagre flaw hell mess terribl crap dead gap dumb incorrect Table 1: Extracted polarity words by JST on the combined data sets. tive or negative. The matrix λ can be considered as a transformation matrix which modifies the Dirichlet priors β of size S × T × V , so that the word prior polarity can be captured. For example, the word “excellent” with index i in the vocabulary has a positive polarity. The corresponding row vector in λ is [0, 1, 0] with its elements representing neutral, positive, and negative. For each topic j, multiplying λli with βlji, only the value of βlposji is retained, and βlneuji and βlnegji are set to 0. Thus, the word “excellent” can only be drawn from the positive topic word distributions generated from a Dirichlet distribution with parameter βlpos. 4 Polarity Words Extracted by JST The JST model allows clustering different terms which share similar sentiment. In this section, we study the polarity-bearing topics extracted by JST. We combined reviews from the source and target domains and discarded document labels in both domains. There are a total of six different combinations. We then run JST on the combined data sets and listed some of the topic words extracted as shown in Table ??. 
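Before examining the table, the prior-modification step described above can be made concrete with a few lines of code. This is only a sketch of the λ-matrix idea: the three-word vocabulary, two-topic setting, and tiny lexicon are toy assumptions, while the {0,1} form of λ, the label ordering (neutral, positive, negative), and the symmetric prior value 0.01 follow the description in the text.

```python
import numpy as np

S, T = 3, 2                             # sentiment labels (neutral, positive, negative), topics
vocab = ["excellent", "poor", "camera"]
V = len(vocab)
lexicon = {"excellent": 1, "poor": 2}   # word -> prior sentiment label index (toy lexicon)

beta = np.full((S, T, V), 0.01)         # symmetric Dirichlet priors beta_{l,j,i}

lam = np.ones((S, V))                   # lambda: rows = sentiment labels, columns = words
for i, w in enumerate(vocab):
    if w in lexicon:
        lam[:, i] = 0.0
        lam[lexicon[w], i] = 1.0        # keep only the prior-polarity row for lexicon words

beta = beta * lam[:, None, :]           # broadcast over topics: beta_{l,j,i} *= lambda_{l,i}

# "excellent" now has non-zero prior mass only under the positive label, for every topic;
# "camera" keeps its symmetric prior under all three labels.
print(beta[:, 0, vocab.index("excellent")])   # [0.   0.01 0.  ]
print(beta[:, 0, vocab.index("camera")])      # [0.01 0.01 0.01]
```
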
Words in each cell are grouped under one topic and the upper half of the table shows topic words under the positive sentiment label while the lower half shows topic words under the negative sentiment label. We can see that JST appears to better capture sentiment association distribution in the source and target domains. For example, in the DVD+Elec. set, words from the DVD domain describe a rock concert DVD while words from the Electronics domain are likely relevant to stereo amplifiers and receivers, and yet they are grouped under the same topic by the JST model. Checking the word coverage in each domain reveals that for example “bass” seldom appears in the DVD domain, but appears more often in the Electronics domain. Likewise, in the Book+Kitch. set, “stainless” rarely appears in the Book domain and “interest” does not occur often in the Kitchen domain and they are grouped under the same topic. These observations motivate us to explore polaritybearing topics extracted by JST for cross-domain sentiment classification since grouping words from different domains but bearing similar sentiment has the effect of overcoming the data distribution difference of two domains. 5 Domain Adaptation using JST Given input data x and a class label y, labeled patterns of one domain can be drawn from the joint distribution P(x, y) = P(y|x)P(x). Domain adaptation usually assume that data distribution are different in source and target domains, i.e., Ps(x) ̸= Pt(x). The task of domain adaptation is to predict the label yt i corresponding to xt i in the target domain. We assume that we are given two sets of training data, Ds and Dt, the source domain and target domain data sets, respectively. In the multiclass classification problem, the source domain data consist of labeled instances, Ds = {(xs n; ys n) ∈X × Y : 1 ≤n ≤Ns}, where X is the input space and Y is a finite set of class labels. No class label is given in the target domain, Dt = {xt n ∈X : 1 ≤n ≤ Nt, Nt ≫Ns}. Algorithm ?? shows how to perform domain adaptation using the JST model. The source and target domain data are first merged with document labels discarded. A JST model is then 126 learned from the merged corpus to generate polaritybearing topics for each document. The original documents in the source domain are augmented with those polarity-bearing topics as shown in Step 4 of Algorithm ??, where li zi denotes a combination of sentiment label li and topic zi for word wi. Finally, feature selection is performed according to the information gain criteria and a classifier is then trained from the source domain using the new document representations. The target domain documents are also encoded in a similar way with polarity-bearing topics added into their feature representations. Algorithm 1 Domain adaptation using JST. Input: The source domain data Ds = {(xs n; ys n) ∈X × Y : 1 ≤n ≤N s}, the target domain data, Dt = {xt n ∈X : 1 ≤n ≤N t, N t ≫N s} Output: A sentiment classifier for the target domain Dt 1: Merge Ds and Dt with document labels discarded, D = {(xs n, 1 ≤n ≤N s; xt n, 1 ≤n ≤N t} 2: Train a JST model on D 3: for each document xs n = (w1, w2, ..., wm) ∈Ds do 4: Augment document with polarity-bearing topics generated from JST, xs′ n = (w1, w2, ..., wm, l1 z1, l2 z2, ..., lm zm) 5: Add {xs′ n ; ys n} into a document pool B 6: end for 7: Perform feature selection using IG on B 8: Return a classifier, trained on B As discussed in Section ?? 
that the JST model directly models P(l|d), the probability of sentiment label given document, and hence document polarity can be classified accordingly. Since JST model learning does not require the availability of document labels, it is possible to augment the source domain data by adding most confident pseudo-labeled documents from the target domain by the JST model as shown in Algorithm ??. 6 Experiments We evaluate our proposed approach on the two datasets, the movie review (MR) data and the multidomain sentiment (MDS) dataset. The movie review data consist of 1000 positive and 1000 negative movie reviews drawn from the IMDB movie archive while the multi-domain sentiment dataset contains four different types of product reviews extracted from Amazon.com including Book, DVD, Electronics and Kitchen appliances. Each category Algorithm 2 Adding pseudo-labeled documents. Input: The target domain data, Dt = {xt n ∈X : 1 ≤n ≤Nt, Nt ≫Ns}, document sentiment classification threshold τ Output: A labeled document pool B 1: Train a JST model parameterized by Λ on Dt 2: for each document xt n ∈Dt do 3: Infer its sentiment class label from JST as ln = arg maxs P(l|xt n; Λ) 4: if P(ln|xt n; Λ) > τ then 5: Add labeled sample (xt n, ln) into a document pool B 6: end if 7: end for of product reviews comprises of 1000 positive and 1000 negative reviews and is considered as a domain. Preprocessing was performed on both of the datasets by removing punctuation, numbers, nonalphabet characters and stopwords. The MPQA subjectivity lexicon is used as a sentiment lexicon in our experiments. 6.1 Experimental Setup While the original JST model can produce reasonable results with a simple symmetric Dirichlet prior, here we use asymmetric prior α over the topic proportions which is learned directly from data using a fixed-point iteration method (?). In our experiment, α was updated every 25 iterations during the Gibbs sampling procedure. In terms of other priors, we set symmetric prior β = 0.01 and γ = (0.05×L)/S, where L is the average document length, and the value of 0.05 on average allocates 5% of probability mass for mixing. 6.2 Supervised Sentiment Classification We performed 5-fold cross validation for the performance evaluation of supervised sentiment classification. Results reported in this section are averaged over 10 such runs. We have tested several classifiers including Na¨ıve Bayes (NB) and support vector machines (SVMs) from WEKA5, and maximum entropy (ME) from MALLET6. All parameters are set to their default values except the Gaussian 5http://www.cs.waikato.ac.nz/ml/weka/ 6http://mallet.cs.umass.edu/ 127 prior variance is set to 0.1 for the ME model training. The results show that ME consistently outperforms NB and SVM on average. Thus, we only report results from ME trained on document vectors with each term weighted according to its frequency. 85 90 95 100 ccuracy (%) Movie Review Book DVD Electronics Kitchen 75 80 85 90 95 100 1 5 10 15 30 50 100 150 200 Accuracy (%) No. of Topics Movie Review Book DVD Electronics Kitchen Figure 2: Classification accuracy vs. no. of topics. The only parameter we need to set is the number of topics T. It has to be noted that the actual number of feature clusters is 3 × T. For example, when T is set to 5, there are 5 topic groups under each of the positive, negative, or neutral sentiment labels and hence there are altogether 15 feature clusters. 
The generated topics for each document from the JST model were simply added into its bag-of-words (BOW) feature representation prior to model training. Figure ?? shows the classification results on the five different domains by varying the number of topics from 1 to 200. It can be observed that the best classification accuracy is obtained when the number of topics is set to 1 (or 3 feature clusters). Increasing the number of topics results in the decrease of accuracy though it stabilizes after 15 topics. Nevertheless, when the number of topics is set to 15, using JST feature augmentation still outperforms ME without feature augmentation (the baseline model) in all of the domains. It is worth pointing out that the JST model with single topic becomes the standard LDA model with only three sentiment topics. Nevertheless, we have proposed an effective way to incorporate domain-independent word polarity prior information into model learning. As will be shown later in Table ?? that the JST model with word polarity priors incorporated performs significantly better than the LDA model without incorporating such prior information. For comparison purpose, we also run the LDA model and augmented the BOW features with the Method MR MDS Book DVD Elec. Kitch. Baseline 82.53 79.96 81.32 83.61 85.82 LDA 83.76 84.32 85.62 85.4 87.68 JST 94.98 89.95 91.7 88.25 89.85 [YE10] 91.78 82.75 82.85 84.55 87.9 [LI10] 79.49 81.65 83.64 85.65 Table 2: Supervised sentiment classification accuracy. generated topics in a similar way. The best accuracy was obtained when the number of topics is set to 15 in the LDA model. Table ?? shows the classification accuracy results with or without feature augmentation. We have performed significance test and found that LDA performs statistically significant better than Baseline according to a paired t-test with p < 0.005 for the Kitchen domain and with p < 0.001 for all the other domains. JST performs statistically significant better than both Baseline and LDA with p < 0.001. We also compare our method with other recently proposed approaches. Yessenalina et al. (?) explored different methods to automatically generate annotator rationales to improve sentiment classification accuracy. Our method using JST feature augmentation consistently performs better than their approach (denoted as [YE10] in Table ??). They further proposed a two-level structured model (?) for document-level sentiment classification. The best accuracy obtained on the MR data is 93.22% with the model being initialized with sentence-level human annotations, which is still worse than ours. Li et al. (?) adopted a two-stage process by first classifying sentences as personal views and impersonal views and then using an ensemble method to perform sentiment classification. Their method (denoted as [LI10] in Table ??) performs worse than either LDA or JST feature augmentation. To the best of our knowledge, the results achieved using JST feature augmentation are the state-of-the-art for both the MR and the MDS datasets. 6.3 Domain Adaptation We conducted domain adaptation experiments on the MDS dataset comprising of four different domains, Book (B), DVD (D), Electronics (E), and Kitchen appliances (K). We randomly split each do128 main data into a training set of 1,600 instances and a test set of 400 instances. A classifier trained on the training set of one domain is tested on the test set of a different domain. We preformed 5 random splits and report the results averaged over 5 such runs. 
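Before turning to the baseline comparison, the feature-augmentation step used throughout these experiments (Algorithm 1) can be sketched as follows. The documents, topic assignments, and labels below are invented; information gain is approximated by mutual information (equivalent for binary indicator features), and logistic regression stands in for the maximum-entropy classifier. The JST inference itself is assumed to be available and is not reproduced.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression

def augment(doc_words, jst_assignments):
    """Append 'sentiment-label_topic' tokens (the l_i z_i of Algorithm 1) to the BOW features."""
    feats = {}
    for w in doc_words:
        feats[w] = feats.get(w, 0) + 1
    for l, z in jst_assignments:                 # one (label, topic) pair per word token
        key = f"senti{l}_topic{z}"
        feats[key] = feats.get(key, 0) + 1
    return feats

# Toy source-domain documents with hypothetical JST assignments and sentiment labels.
docs = [(["great", "camera", "love"], [(1, 0), (0, 1), (1, 0)], 1),
        (["poor", "battery", "broke"], [(2, 0), (0, 1), (2, 0)], 0),
        (["excellent", "lens"],        [(1, 0), (0, 1)],        1),
        (["terrible", "screen"],       [(2, 0), (0, 1)],        0)]

X_dicts = [augment(words, jst) for words, jst, _ in docs]
y = [label for _, _, label in docs]

vec = DictVectorizer()
X = vec.fit_transform(X_dicts)

# Information-gain-style feature selection, then a classifier on the filtered representation.
selector = SelectKBest(mutual_info_classif, k=min(5, X.shape[1]))
X_sel = selector.fit_transform(X, y)
clf = LogisticRegression(max_iter=1000).fit(X_sel, y)
print(clf.predict(selector.transform(vec.transform(
    [augment(["great", "lens"], [(1, 0), (0, 1)])]))))   # expected: [1]
```

The target-domain documents would be encoded with the same augment/vectorize/select pipeline before being classified.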
Comparison with Baseline Models We compare our proposed approaches with two baseline models. The first one (denoted as “Base” in Table ??) is an ME classifier trained without adaptation. LDA results were generated from an ME classifier trained on document vectors augmented with topics generated from the LDA model. The number of topics was set to 15. JST results were obtained in a similar way except that we used the polaritybearing topics generated from the JST model. We also tested with adding pseudo-labeled examples from the JST model into the source domain for ME classifier training (following Algorithm ??), denoted as “JST-PL” in Table ??. The document sentiment classification probability threshold τ was set to 0.8. Finally, we performed feature selection by selecting the top 2000 features according to the information gain criteria (“JST-IG”)7. There are altogether 12 cross-domain sentiment classification tasks. We showed the adaptation loss results in Table ?? where the result for each domain and for each method is averaged over all three possible adaptation tasks by varying the source domain. The adaptation loss is calculated with respect to the in-domain gold standard classification result. For example, the in-domain goal standard for the Book domain is 79.96%. For adapting from DVD to Book, baseline achieves 72.25% and JST gives 76.45%. The adaptation loss is 7.71 for baseline and 3.51 for JST. It can be observed from Table ?? that LDA only improves slightly compared to the baseline with an error reduction of 11%. JST further reduces the error due to transfer by 27%. Adding pseudo-labeled examples gives a slightly better performance compared to JST with an error reduction of 36%. With feature selection, JST-IG outperforms all the other approaches with a relative error reduction of 53%. 7Both values of 0.8 and 2000 were set arbitrarily after an initial run on some held-out data; they were not tuned to optimize test performance. Domain Base LDA JST JST-PL JST-IG Book 10.8 9.4 7.2 6.3 5.2 DVD 8.3 6.1 4.8 4.4 2.9 Electr. 7.9 7.7 6.3 5.4 3.9 Kitch. 7.6 7.6 6.9 6.1 4.4 Average 8.6 7.7 6.3 5.5 4.1 Table 3: Adaptation loss with respect to the in-domain gold standard. The last row shows the average loss over all the four domains. Parameter Sensitivity There is only one parameters to be set in the JSTIG approach, the number of topics. We plot the classification accuracy versus different topic numbers in Figure ?? with the number of topics varying between 1 and 200, corresponding to feature clusters varying between 3 and 600. It can be observed that for the relatively larger Book and DVD data sets, the accuracies peaked at topic number 10, whereas for the relatively smaller Electronics and Kitchen data sets, the best performance was obtained at topic number 50. Increasing topic numbers results in the decrease of classification accuracy. Manually examining the extracted polarity topics from JST reveals that when the topic number is small, each topic cluster contains well-mixed words from different domains. However, when the topic number is large, words under each topic cluster tend to be dominated by a single domain. Comparison with Existing Approaches We compare in Figure ?? our proposed approach with two other domain adaptation algorithms for sentiment classification, SCL and SFA. Each set of bars represent a cross-domain sentiment classification task. The thick horizontal lines are in-domain sentiment classification accuracies. 
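For readers checking the figures, the adaptation losses and relative error reductions quoted above follow directly from the in-domain results and Table 3; the short calculation below reproduces them, with the numbers copied from the text.

```python
# Adaptation loss = in-domain gold-standard accuracy minus cross-domain accuracy.
in_domain_gold_book = 79.96                       # in-domain accuracy for the Book domain
dvd_to_book = {"baseline": 72.25, "JST": 76.45}
for name, acc in dvd_to_book.items():
    print(name, round(in_domain_gold_book - acc, 2))   # 7.71 and 3.51

# Average losses from the last row of Table 3 and the resulting relative error reductions.
avg_loss = {"Base": 8.6, "LDA": 7.7, "JST": 6.3, "JST-PL": 5.5, "JST-IG": 4.1}
for name in ("LDA", "JST", "JST-PL", "JST-IG"):
    reduction = (avg_loss["Base"] - avg_loss[name]) / avg_loss["Base"]
    # roughly the 11%, 27%, 36% and 53% reductions quoted in the text
    # (one-point differences come from rounding in Table 3)
    print(name, f"{reduction:.0%}")
```
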
It is worth noting that our in-domain results are slightly different from those reported in (?; ?) due to different random splits. Our proposed JST-IG approach outperforms SCL on average and achieves comparable results to SFA. While SCL requires the construction of a reasonable number of auxiliary tasks that are useful to model “pivots” and “non-pivots”, SFA relies on a good selection of domain-independent features for the construction of a bipartite feature graph before running spectral clustering to derive feature clusters. On the contrary, our proposed approach based on the JST model is much simpler and yet still achieves comparable results. Figure 3: Classification accuracy (%) vs. number of topics for each adaptation task; (a) adapted to the Book and DVD data sets, (b) adapted to the Electronics and Kitchen data sets. Figure 4: Comparison with existing approaches (baseline, SCL-MI, SFA, JST-IG); (a) adapted to the Book and DVD data sets, (b) adapted to the Electronics and Kitchen data sets; the thick horizontal lines mark the in-domain accuracies (79.96, 81.32, 83.61, 85.82). 7 Conclusions In this paper, we have studied polarity-bearing topics generated from the JST model and shown that by augmenting the original feature space with polarity-bearing topics, the in-domain supervised classifiers learned from augmented feature representation achieve the state-of-the-art performance on both the movie review data and the multi-domain sentiment dataset. Furthermore, using feature augmentation and selection according to the information gain criteria for cross-domain sentiment classification, our proposed approach outperforms SCL and gives similar results to SFA. Nevertheless, our approach is much simpler and does not require difficult parameter tuning. There are several directions we would like to explore in the future. First, since the polarity-bearing topics generated by the JST model were simply added into the original feature space of documents, it is worth investigating attaching a different weight to each topic, perhaps in proportion to the posterior probability of sentiment label and topic given a word estimated by the JST model. Second, it might be interesting to study the effect of introducing a tradeoff parameter to balance the effect of original and new features. Finally, our experimental results show that adding pseudo-labeled examples by the JST model does not appear to be effective. We could possibly explore instance weighting strategies (?) on both pseudo-labeled examples and source domain training examples in order to improve the adaptation performance. Acknowledgements This work was supported in part by the EC-FP7 projects ROBUST (grant number 257859). References R.K. Ando and T. Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853. A. Aue and M. Gamon. 2005. Customizing sentiment classifiers to new domains: a case study. In Proceedings of Recent Advances in Natural Language Processing (RANLP). David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003.
Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022. J. Blitzer, M. Dredze, and F. Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL, page 440– 447. C. Chelba and A. Acero. 2004. Adaptation of maximum entropy classifier: Little data can help a lot. In EMNLP. W. Dai, Y. Chen, G.R. Xue, Q. Yang, and Y. Yu. 2008. Translated learning: Transfer learning across different feature spaces. In NIPS, pages 353–360. W. Dai, O. Jin, G.R. Xue, Q. Yang, and Y. Yu. 2009. Eigentransfer: a unified framework for transfer learning. In ICML, pages 193–200. H. Daum´e III and D. Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26(1):101–126. H. Daum´e. 2007. Frustratingly easy domain adaptation. In ACL, pages 256–263. J. Jiang and C.X. Zhai. 2007. Instance weighting for domain adaptation in NLP. In ACL, pages 264–271. A. Kennedy and D. Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence, 22(2):110–125. S. Li, C.R. Huang, G. Zhou, and S.Y.M. Lee. 2010. Employing personal/impersonal views in supervised and semi-supervised sentiment classification. In ACL, pages 414–423. C. Lin and Y. He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of the 18th ACM international conference on Information and knowledge management (CIKM), pages 375–384. C. Lin, Y. He, and R. Everson. 2010. A Comparative Study of Bayesian Models for Unsupervised Sentiment Detection. In Proceedings of the 14th Conference on Computational Natural Language Learning (CoNLL), pages 144–152. Ryan McDonald, Kerry Hannan, Tyler Neylon, Mike Wells, and Jeff Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In ACL, pages 432– 439. T. Minka. 2003. Estimating a Dirichlet distribution. Technical report. S.J. Pan, X. Ni, J.T. Sun, Q. Yang, and Z. Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proceedings of the 19th international conference on World Wide Web (WWW), pages 751–760. Bo Pang and Lillian Lee. 2004. A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In ACL, page 271–278. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: sentiment classification using machine learning techniques. In EMNLP, pages 79–86. B. Roark and M. Bacchiani. 2003. Supervised and unsupervised PCFG adaptation to novel domains. In NAACL-HLT, pages 126–133. C.W. Seah, I. Tsang, Y.S. Ong, and K.K. Lee. 2010. Predictive Distribution Matching SVM for Multi-domain Learning. In ECML-PKDD, pages 231–247. Casey Whitelaw, Navendu Garg, and Shlomo Argamon. 2005. Using appraisal groups for sentiment analysis. In Proceedings of the ACM international conference on Information and Knowledge Management (CIKM), pages 625–631. Q. Wu, S. Tan, and X. Cheng. 2009. Graph ranking for sentiment transfer. In ACL-IJCNLP, pages 317–320. Q. Wu, S. Tan, X. Cheng, and M. Duan. 2010. MIEA: a Mutual Iterative Enhancement Approach for CrossDomain Sentiment Classification. In COLING, page 1327-1335. A. Yessenalina, Y. Choi, and C. Cardie. 2010a. Automatically generating annotator rationales to improve sentiment classification. In ACL, pages 336–341. A. Yessenalina, Y. Yue, and C. Cardie. 2010b. MultiLevel Structured Models for Document-Level Sentiment Classification. In EMNLP, pages 1046–1056. Jun Zhao, Kang Liu, and Gen Wang. 2008. 
Adding redundant features for CRFs-based sentence sentiment classification. In EMNLP, pages 117–126.
2011
13
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1298–1307, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Translating from Morphologically Complex Languages: A Paraphrase-Based Approach Preslav Nakov Department of Computer Science National University of Singapore 13 Computing Drive Singapore 117417 [email protected] Hwee Tou Ng Department of Computer Science National University of Singapore 13 Computing Drive Singapore 117417 [email protected] Abstract We propose a novel approach to translating from a morphologically complex language. Unlike previous research, which has targeted word inflections and concatenations, we focus on the pairwise relationship between morphologically related words, which we treat as potential paraphrases and handle using paraphrasing techniques at the word, phrase, and sentence level. An important advantage of this framework is that it can cope with derivational morphology, which has so far remained largely beyond the capabilities of statistical machine translation systems. Our experiments translating from Malay, whose morphology is mostly derivational, into English show significant improvements over rivaling approaches based on five automatic evaluation measures (for 320,000 sentence pairs; 9.5 million English word tokens). 1 Introduction Traditionally, statistical machine translation (SMT) models have assumed that the word should be the basic token-unit of translation, thus ignoring any wordinternal morphological structure. This assumption can be traced back to the first word-based models of IBM (Brown et al., 1993), which were initially proposed for two languages with limited morphology: French and English. While several significantly improved models have been developed since then, including phrase-based (Koehn et al., 2003), hierarchical (Chiang, 2005), treelet (Quirk et al., 2005), and syntactic (Galley et al., 2004) models, they all preserved the assumption that words should be atomic. Ignoring morphology was fine as long as the main research interest remained focused on languages with limited (e.g., English, French, Spanish) or minimal (e.g., Chinese) morphology. Since the attention shifted to languages like Arabic, however, the importance of morphology became obvious and several approaches to handle it have been proposed. Depending on the particular language of interest, researchers have paid attention to word inflections and clitics, e.g., for Arabic, Finnish, and Turkish, or to noun compounds, e.g., for German. However, derivational morphology has not been specifically targeted so far. In this paper, we propose a paraphrase-based approach to translating from a morphologically complex language. Unlike previous research, we focus on the pairwise relationship between morphologically related wordforms, which we treat as potential paraphrases, and which we handle using paraphrasing techniques at various levels: word, phrase, and sentence level. An important advantage of this framework is that it can cope with various kinds of morphological wordforms, including derivational ones. We demonstrate its potential on Malay, whose morphology is mostly derivational. 
The remainder of the paper is organized as follows: Section 2 gives an overview of Malay morphology, Section 3 introduces our paraphrase-based approach to translating from morphologically complex languages, Section 4 describes our dataset and our experimental setup, Section 5 presents and analyses the results, and Section 6 compares our work to previous research. Finally, Section 7 concludes the paper and suggests directions for future work. 1298 2 Malay Morphology and SMT Malay is an Astronesian language, spoken by about 180 million people. It is official in Malaysia, Indonesia, Singapore, and Brunei, and has two major dialects, sometimes regarded as separate languages, which are mutually intelligible, but occasionally differ in orthography/pronunciation and vocabulary: Bahasa Malaysia (lit. ‘language of Malaysia’) and Bahasa Indonesia (lit. ‘language of Indonesia’). Malay is an agglutinative language with very rich morphology. Unlike other agglutinative languages such as Finnish, Hungarian, and Turkish, which are rich in both inflectional and derivational forms, Malay morphology is mostly derivational. Inflectionally,1 Malay is very similar to Chinese: there is no grammatical gender, number, or tense, verbs are not marked for person, etc. In Malay, new words can be formed by the following three morphological processes: • Affixation, i.e., attaching affixes, which are not words themselves, to a word. These can be prefixes (e.g., ajar/‘teach’ →pelajar/‘student’), suffixes (e.g., ajar →ajaran/‘teachings’), circumfixes (e.g., ajar →pengajaran/‘lesson’), and infixes (e.g., gigi/‘teeth’ →gerigi/‘toothed blade’). Infixes only apply to a small number of words and are not productive. • Compounding, i.e., forming a new word by putting two or more existing words together. For example, kereta/‘car’ + api/‘fire’ make kereta api and keretapi in Bahasa Indonesia and Bahasa Malaysia, respectively, both meaning ‘train’. As in English, Malay compounds are written separately, but some stable ones like kerjasama/‘collaboration’ (from kerja/‘work’ and sama/‘same’) are concatenated. Concatenation is also required when a circumfix is applied to a compound, e.g., ambil alih/‘take over’ (ambil/‘take’ + alih/‘move’) is concatenated to form pengambilalihan/‘takeover’ when targeted by the circumfix peng-. . .-an. 1Inflection is variation in the form of a word that is obligatory in some given grammatical context. For example, plays, playing, played are all inflected forms of the verb play. It does not yield a new word and cannot change the part of speech. • Reduplication, i.e., word repetition. In Malay, reduplication requires using a dash. It can be full (e.g., pelajar-pelajar/‘students’), partial (e.g., adik-beradik/‘siblings’, from adik/‘younger brother/sister’), and rhythmic (e.g., gunung-ganang/‘mountains’, from the word gunung/‘mountain’). Malay has very little inflectional morphology, It also has some clitics2, which are not very frequent and are typically spelled concatenated to the preceding word. For example, the politeness marker lah can be added to the command duduk/‘sit down’ to yield duduklah/‘please, sit down’, and the pronoun nya can attach to kereta to form keretanya/‘his car’. Note that clitics are not affixes, and clitic attachment is not a word derivation or a word inflection process. Taken together, affixation, compounding, reduplication, and clitic attachment yield a rich variety of wordforms, which cause data sparseness issues. 
Moreover, the predominantly derivational nature of Malay morphology limits the applicability of standard techniques such as (1) removing some/all of the source-language inflections, (2) segmenting affixes from the root, and (3) clustering words with the same target translation. For example, if pelajar/‘student’ is an unknown word and lemmatization/stemming reduces it to ajar/‘teach’, would this enable a good translation? Similarly, would segmenting3 pelajar as peN+ ajar, i.e., as ‘person doing the action’ + ‘teach’, make it possible to generate ‘student’ (e.g., as opposed to ‘teacher’)? Finally, if affixes tend to change semantics so much, how likely are we to find morphologically related wordforms that share the same translation? Still, there are many good reasons to believe that morphological processing should help SMT for Malay. Consider affixation, which can yield words with similar semantics that can use each other’s translation options, e.g., diajar/‘be taught (intransitive)’ and diajarkan/‘be taught (transitive)’. However, this cannot be predicted from the affix, e.g., compare minum/‘drink (verb)’ – minuman/‘drink (noun)’ and makan/‘eat’ – makanan/‘food’. 2A clitic is a morpheme that has the syntactic characteristics of a word, but is phonologically bound to another word. For example, ’s is a clitic in The Queen of England’s crown. 3The prefix peN suffers a nasal replacement of the archiphoneme N to become pel in pelajar. 1299 Looking at compounding, it is often the case that the semantics of a compound is a specialization of the semantics of its head, and thus the target language translations available for the head could be usable to translate the whole compound, e.g., compare kerjasama/‘collaboration’ and kerja/‘work’. Alternatively, it might be useful to consider a segmented version of the compound, e.g., kerja sama. Reduplication, among other functions, expresses plural, e.g., pelajar-pelajar/‘students’. Note, however, that it is not used when a quantity or a number word is present, e.g., dua pelajar/‘two students’ and banyak pelajar/‘many students’. Thus, if we do not know how to translate pelajar-pelajar, it would be reasonable to consider the translation options for pelajar since it could potentially contain among its translation options the plural ‘students’. Finally, consider clitics. In some cases, a clitic could express a fine-grained distinction such as politeness, which might not be expressible in the target language; thus, it might be feasible to simply remove it. In other cases, e.g., when it is a pronoun, it might be better to segment it out as a separate word. 3 Method We propose a paraphrase-based approach to Malay morphology, where we use paraphrases at three different levels: word, phrase, and sentence level. First, we transform each development/testing Malay sentence into a word lattice, where we add simplified word-level paraphrasing alternatives for each morphologically complex word. In the lattice, each alternative w′ of an original word w is assigned the weight of Pr(w′|w), which is estimated using pivoting over the English side of the training bitext. Then, we generate sentence-level paraphrases of the training Malay sentences, in which exactly one morphologically complex word is substituted by a simpler alternative. Finally, we extract additional Malay phrases from these sentences, which we use to augment the phrase table with additional translation options to match the alternative wordforms in the lattice. 
We assign each such additional phrase p′ a probability maxp Pr(p′|p), where p is a Malay phrase that is found in the original training Malay text. The probability is calculated using phrase-level pivoting over the English side of the training bi-text. 3.1 Morphological Analysis Given a Malay word, we build a list of morphologically simpler words that could be derived from it; we also generate alternative word segmentations: (a) words obtainable by affix stripping e.g., pelajaran →pelajar, ajaran, ajar (b) words that are part of a compound word e.g., kerjasama →kerja (c) words appearing on either side of a dash e.g., adik-beradik →adik, beradik (d) words without clitics e.g., keretanya →kereta (e) clitic-segmented word sequences e.g., keretanya →kereta nya (f) dash-segmented wordforms e.g., aceh-nias →aceh - nias (g) combinations of the above. The list is built by reversing the basic morphological processes in Malay: (a) addresses affixation, (b) handles compounding, (c) takes care of reduplication, and (d) and (e) deal with clitics. Strictly speaking, (f) does not necessarily model a morphological process: it proposes an alternative tokenization, but this could make morphological sense too. Note that (g) could cause potential problems when interacting with (f), e.g., adik-beradik would become adik - beradik and then by (a) it would turn into adik - adik, which could cause the SMT system to generate two separate translations for the two instances of adik. To prevent this, we forbid the application of (f) to reduplications. Taking into account that reduplications can be partial, we only allow (f) if |LCS(l,r)| min(|l|,|r|) < 0.5, where l and r are the strings to the left and to the right of the dash, respectively, LCS(x, y) is the longest common character subsequence, not necessarily consecutive, of the strings x and y, and |x| is the length of the string x. For example, LCS(adik,beradik)=adik, and thus, the ratio is 1 (≥0.5) for adik-beradik. Similarly, LCS(gunung,ganang)=gnng, and thus, the ratio is 4/6=0.67 (≥0.5) for gunung-ganang. However, for aceh-nias, it is 1/4=0.25, and thus (f) is applicable. 1300 As an illustration, here are the wordforms we generate for adik-beradiknya/‘his siblings’: adik, adik-beradiknya, adik-beradik nya, adik-beradik, beradiknya, beradik nya, adik nya, and beradik. And for berpelajaran/‘is educated’, we build the list: berpelajaran, pelajaran, pelajar, ajaran, and ajar. Note that the lists do include the original word. To generate the above wordforms, we used two morphological analyzers: a freely available Malay lemmatizer (Baldwin and Awab, 2006), and an inhouse re-implementation of the Indonesian stemmer described in (Adriani et al., 2007). Note that these tools’ objective is to return a single lemma/stem, e.g., they would return adik for adik-beradiknya, and ajar for berpelajaran. However, it was straightforward to modify them to also output the above intermediary wordforms, which the tools were generating internally anyway when looking for the final lemma/stem. Finally, since the two modified analyzers had different strengths and weaknesses, we combined their outputs to increase recall. 3.2 Word-Level Paraphrasing We perform word-level paraphrasing of the Malay sides of the development and the testing bi-texts. First, for each Malay word, we generate the above-described list of morphologically simpler words and alternative word segmentations; we think of the words in this list as word-level paraphrases. 
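As a concrete illustration of the candidate-generation rules just described, the minimal sketch below covers dash handling (rules (c) and (f)), clitic stripping and segmentation (rules (d) and (e)), and the LCS-ratio test that blocks rule (f) on reduplications. The clitic inventory and length guard are drastic simplifications, affix stripping (a)/(b) and the combinations of rule (g) are omitted, and the real system relies on the two modified analyzers mentioned above rather than on hand-written rules.

```python
from functools import lru_cache

def lcs_len(a, b):
    """Length of the longest (not necessarily contiguous) common character subsequence."""
    @lru_cache(maxsize=None)
    def rec(i, j):
        if i == len(a) or j == len(b):
            return 0
        if a[i] == b[j]:
            return 1 + rec(i + 1, j + 1)
        return max(rec(i + 1, j), rec(i, j + 1))
    return rec(0, 0)

def dash_splittable(word):
    """Rule (f) is allowed only if |LCS(l, r)| / min(|l|, |r|) < 0.5, i.e. not a reduplication."""
    l, r = word.split("-", 1)
    return lcs_len(l, r) / min(len(l), len(r)) < 0.5

CLITICS = ("nya", "lah")                     # simplified clitic inventory (assumption)

def candidates(word):
    """Toy version of rules (c)-(f): dash parts, clitic stripping/segmentation, dash splitting."""
    out = {word}
    for clitic in CLITICS:                   # rules (d) and (e)
        if word.endswith(clitic) and len(word) > len(clitic) + 2:   # crude length guard
            stem = word[: -len(clitic)]
            out.update({stem, f"{stem} {clitic}"})
    if "-" in word:
        l, r = word.split("-", 1)
        out.update({l, r})                   # rule (c)
        if dash_splittable(word):
            out.add(f"{l} - {r}")            # rule (f), blocked for reduplications
    return sorted(out)

print(lcs_len("gunung", "ganang"))           # 4 -> ratio 4/6, so (f) is blocked
print(dash_splittable("adik-beradik"))       # False (full/partial reduplication)
print(dash_splittable("aceh-nias"))          # True  (ratio 1/4 = 0.25)
print(candidates("keretanya"))               # ['kereta', 'kereta nya', 'keretanya']
```
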
Then, for each development/testing Malay sentence, we generate a lattice encoding all possible paraphrasing options for each individual word. We further specify a weight for each arc. We assign 1 to the original Malay word w, and Pr(w′|w) to each paraphrase w′ of w, where Pr(w′|w) is the probability that w′ is a good paraphrase of w. Note that multi-word paraphrases, e.g., resulting from clitic segmentation, are encoded using a sequence of arcs; in such cases, we assign Pr(w′|w) to the first arc, and 1 to each subsequent arc. We calculate the probability Pr(w′|w) using the training Malay-English bi-text, which we align at the word level using IBM model 4 (Brown et al., 1993), and we observe which English words w and w′ are aligned to. More precisely, we use pivoting to estimate the probability Pr(w′|w) as follows: Pr(w′|w) = P i Pr(w′|w, ei)Pr(ei|w) Then, following (Callison-Burch et al., 2006; Wu and Wang, 2007), we make the simplifying assumption that w′ is conditionally independent of w given ei, thus obtaining the following expression: Pr(w′|w) = P i Pr(w′|ei)Pr(ei|w) We estimate the probability Pr(ei|w) directly from the word-aligned training bi-text as follows: Pr(ei|w) = #(w,ei) P j #(w,ej) where #(x, e) is the number of times the Malay word x is aligned to the English word e. Estimating Pr(w′|ei) cannot be done directly since w′ might not be present on the Malay side of the training bi-text, e.g., because it is a multi-token sequence generated by clitic segmentation. Thus, we think of w′ as a pseudoword that stands for the union of all Malay words in the training bi-text that are reducible to w′ by our morphological analysis procedure. So, we estimate Pr(w′|ei) as follows: Pr(w′|ei) = Pr({v : w′ ∈forms(v)}|ei) where forms(x) is the set of the word-level paraphrases4 for the Malay word x. Since the training bi-text occurrences of the words that are reducible to w′ are distinct, we can rewrite the above as follows: Pr(w′|ei) = P v:w′∈forms(v) Pr(v|ei) Finally, the probability Pr(v|ei) can be estimated using maximum likelihood: Pr(v|ei) = #(v,ei) P u #(u,ei) 3.3 Sentence-Level Paraphrasing In order for the word-level paraphrases to work, there should be phrases in the phrase table that could potentially match them. For some of the words, e.g., the lemmata, there could already be such phrases, but for other transformations, e.g., clitic segmentation, this is unlikely. Thus, we need to augment the phrase table with additional translation options. One approach would be to modify the phrase table directly, e.g., by adding additional entries, where one or more Malay words are replaced by their paraphrases. This would be problematic since the phrase translation probabilities associated with these new 4Note that our paraphrasing process is directed: the paraphrases are morphologically simpler than the original word. 1301 entries would be hard to estimate. For example, the clitics, and even many of the intermediate morphological forms, would not exist as individual words in the training bi-text, which means that there would be no word alignments or lexical probabilities available for them. Another option would be to generate separate word alignments for the original training bi-text and for a version of it where the source (Malay) side has been paraphrased. Then, the two bi-texts and their word alignments would be concatenated and used to build a phrase table (Dyer, 2007; Dyer et al., 2008; Dyer, 2009). 
This would solve the problems with the word alignments and the phrase pair probabilities estimations in a principled manner, but it would require choosing for each word only one of the paraphrases available to it, while we would prefer to have a way to allow all options. Moreover, the paraphrased and the original versions of the corpus would be given equal weights, which might not be desirable. Finally, since the two versions of the bitext would be word-aligned separately, there would be no interaction between them, which might lead to missed opportunities for improved alignments in both parts of the bi-text (Nakov and Ng, 2009). We avoid the above issues by adopting a sentencelevel paraphrasing approach. Following the general framework proposed in (Nakov, 2008), we first create multiple paraphrased versions of the sourceside sentences of the training bi-text. Then, each paraphrased source sentence is paired with its original translation. This augmented bi-text is wordaligned and a phrase table T ′ is built from it, which is merged with a phrase table T for the original bitext. The merged table contains all phrase entries from T, and the entries for the phrase pairs from T ′ that are not in T. Following Nakov and Ng (2009), we add up to three additional indicator features (taking the values 0.5 and 1) to each entry in the merged phrase table, showing whether the entry came from (1) T only, (2) T ′ only, or (3) both T and T ′. We also try using the first one or two features only. We set all feature weights using minimum error rate training (Och, 2003), and we optimize their number (one, two, or three) on the development dataset.5 5In theory, we should re-normalize the probabilities; in practice, this is not strictly required by the log-linear SMT model. Each of our paraphrased sentences differs from its original sentence by a single word, which prevents combinatorial explosions: on average, we generate 14 paraphrased versions per input sentence. It further ensures that the paraphrased parts of the sentences will not dominate the word alignments or the phrase pairs, and that there would be sufficient interaction at word alignment time between the original sentences and their paraphrased versions. 3.4 Phrase-Level Paraphrasing While our sentence-level paraphrasing informs the decoder about the origin of each phrase pair (original or paraphrased bi-text), it provides no indication about how good the phrase pairs from the paraphrased bi-text are likely to be. Following Callison-Burch et al. (2006), we further augment the phrase table with one additional feature whose value is 1 for the phrase pairs coming from the original bi-text, and maxp Pr(p′|p) for the phrase pairs extracted from the paraphrased bitext. Here p is a Malay phrase from T, and p′ is a Malay phrase from T ′ that does not exist in T but is obtainable from p by substituting one or more words in p with their derivationally related forms generated by morphological analysis. The probability Pr(p′|p) is calculated using phrase-level pivoting through English in the original phrase table T as follows (unlike word-level pivoting, here ei is an English phrase): Pr(p′|p) = P i Pr(p′|ei)Pr(ei|p) We estimate the probabilities Pr(ei|p) and Pr(p′|ei) as we did for word-level pivoting, except that this time we use the list of the phrase pairs extracted from the original training bi-text, while before we used IBM model 4 word alignments. 
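Since the pivot computation is the same at the word and the phrase level, a compact word-level sketch may help before the phrase-level details continue below. Everything in it is a made-up illustration of the formulas above: the alignment counts are hypothetical, the reduction table merely stands in for the morphological analysis of Section 3.1, and at the phrase level the same computation would run over phrase-pair counts rather than word-alignment counts.

```python
# Pivot estimate Pr(w'|w) = sum_i Pr(w'|e_i) * Pr(e_i|w), from toy Malay-English alignment counts.
align = {("pelajar", "student"): 8, ("pelajar", "students"): 2,
         ("pelajar-pelajar", "students"): 3, ("pelajar-pelajar", "pupils"): 1,
         ("murid", "pupils"): 4,
         ("ajar", "teach"): 5, ("ajaran", "teachings"): 2}

def forms(v):
    """Stand-in for Section 3.1: the set of wordforms that v reduces to."""
    table = {"pelajar": {"pelajar", "ajar"},
             "pelajar-pelajar": {"pelajar-pelajar", "pelajar"},
             "ajaran": {"ajaran", "ajar"}}
    return table.get(v, {v})

def p_e_given_w(e, w):
    """Pr(e_i | w): relative frequency of e among the English words aligned to w."""
    total = sum(c for (m, _), c in align.items() if m == w)
    return align.get((w, e), 0) / total if total else 0.0

def p_wprime_given_e(w_prime, e):
    """Pr(w' | e_i): mass of all training words v reducible to w' (the pseudo-word of Section 3.2)."""
    total = sum(c for (_, ee), c in align.items() if ee == e)
    hits = sum(c for (v, ee), c in align.items() if ee == e and w_prime in forms(v))
    return hits / total if total else 0.0

def paraphrase_prob(w_prime, w):
    english = {e for (m, e) in align if m == w}
    return sum(p_wprime_given_e(w_prime, e) * p_e_given_w(e, w) for e in english)

print(round(paraphrase_prob("pelajar", "pelajar-pelajar"), 3))   # 0.8 with these counts
```
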
When calculating Pr(p′|ei), we think of p′ as the set of all possible Malay phrases q in T that are reducible to p′ by morphological analysis of the words they contain. This can be rewritten as follows: Pr(p′|ei) = P q:p′∈par(q) Pr(q|ei) where par(q) is the set of all possible phrase-level paraphrases for the Malay phrase q. The probability Pr(q|ei) is estimated using maximum likelihood from the list of phrase pairs. There is no combinatorial explosion here, since the phrases are short and contain very few paraphrasable words. 1302 Number of sentence pairs 1K 2K 5K 10K 20K 40K 80K 160K 320K Number of English words 30K 60K 151K 301K 602K 1.2M 2.4M 4.7M 9.5M baseline 23.81 27.43 31.53 33.69 36.68 38.49 40.53 41.80 43.02 lemmatize all 22.67 26.20 29.68 31.53 33.91 35.64 37.17 38.58 39.68 -1.14 -1.23 -1.85 -2.16 -2.77 -2.85 -3.36 -3.22 -3.34 ‘noisier’ channel model (Dyer, 2007) 23.27 28.42 32.66 33.69 37.16 38.14 39.79 41.76 42.77 -0.54 +0.99 +1.13 +0.00 +0.48 -0.35 -0.74 -0.04 -0.25 lattice + sent-par (orig+lemma) 24.71 28.65 32.42 34.95 37.32 38.40 39.82 41.97 43.36 +0.90 +1.22 +0.89 +1.26 +0.64 -0.09 -0.71 +0.17 +0.34 lattice + sent-par 24.97 29.11 33.03 35.12 37.39 38.73 41.04 42.24 43.52 +1.16 +1.68 +1.50 +1.43 +0.71 +0.24 +0.51 +0.44 +0.50 lattice + sent-par + word-par 25.14 29.17 33.00 35.09 37.39 38.76 40.75 42.23 43.58 +1.33 +1.74 +1.47 +1.40 +0.71 +0.27 +0.22 +0.43 +0.56 lattice + sent-par + word-par + phrase-par 25.27 29.19 33.35 35.23 37.46 39.00 40.95 42.30 43.73 +1.46 +1.76 +1.82 +1.54 +0.78 +0.51 +0.42 +0.50 +0.71 Table 1: Evaluation results. Shown are BLEU scores and improvements over the baseline (in %) for different numbers of training sentences. Statistically significant improvements are in bold for p < 0.01 and in italic for p < 0.05. 4 Experiments 4.1 Data We created our Malay-English training and development datasets from data that we downloaded from the Web and then sentence-aligned using various heuristics. Thus, we ended up with 350,003 training sentence pairs, including 10.4M English and 9.7M Malay word tokens. We further downloaded 49.8M word tokens of monolingual English text, which we used for language modeling. For testing, we used 1,420 sentences with 28.8K Malay word tokens, which were translated by three human translators, yielding translations of 32.8K, 32.4K, and 32.9K English word tokens, respectively. For development, we used 2,000 sentence pairs of 63.4K English and 58.5K Malay word tokens. 4.2 General Experimental Setup First, we tokenized and lowercased all datasets: training, development, and testing. We then built directed word-level alignments for the training bitext for English→Malay and for Malay→English using IBM model 4 (Brown et al., 1993), which we symmetrized using the intersect+grow heuristic (Och and Ney, 2003). Next, we extracted phraselevel translation pairs of maximum length seven, which we scored and used to build a phrase table where each phrase pair is associated with the following five standard feature functions: forward and reverse phrase translation probabilities, forward and reverse lexicalized phrase translation probabilities, and phrase penalty. We trained a log-linear model using the following standard SMT feature functions: trigram language model probability, word penalty, distance-based distortion cost, and the five feature functions from the phrase table. 
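For the paraphrasing systems, the phrase table just described is replaced by the merged table of Section 3.3. The following sketch shows one way to assemble it, assuming each table maps phrase pairs to feature vectors; the 0.5/1 indicator scheme is implemented under the natural reading that an indicator is 1 when its condition holds and 0.5 otherwise.

```python
def merge_phrase_tables(T, T_para, num_indicators=3):
    """Merge the original phrase table T with the table T' built from the
    paraphrased bi-text: keep every entry of T, add the entries of T' that
    are not in T, and append up to three indicator features (Section 3.3).

    T, T_para : dict mapping (src_phrase, tgt_phrase) -> list of feature values
    Indicator k is set to 1.0 when its condition holds and 0.5 otherwise:
      (1) entry came from T only, (2) from T' only, (3) from both.
    """
    merged = {}
    for pair in set(T) | set(T_para):
        if pair in T:
            feats = list(T[pair])          # entries present in T keep T's scores
        else:
            feats = list(T_para[pair])
        from_T, from_Tp = pair in T, pair in T_para
        conditions = [from_T and not from_Tp,    # (1) T only
                      from_Tp and not from_T,    # (2) T' only
                      from_T and from_Tp]        # (3) both
        feats += [1.0 if c else 0.5 for c in conditions[:num_indicators]]
        merged[pair] = feats
    return merged
```

The number of indicators actually kept (one, two, or three) is treated as a choice to be tuned, as noted in Section 3.3.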
We set all weights on the development dataset by optimizing BLEU (Papineni et al., 2002) using minimum error rate training (Och, 2003), and we plugged them in a beam search decoder (Koehn et al., 2007) to translate the Malay test sentences to English. Finally, we detokenized the output, and we evaluated it against the three reference translations. 4.3 Systems Using the above general experimental setup, we implemented the following baseline systems: • baseline. This is the default system, which uses no morphological processing. • lemmatize all. This is the second baseline that uses lemmatized versions of the Malay side of the training, development and testing datasets. • ‘noisier’ channel model.6 This is the model of Dyer (2007). It uses 0-1 weights in the lattice and only allows lemmata as alternative wordforms; it uses no sentence-level or phrase-level paraphrases. 6We also tried the word segmentation model of Dyer (2009) as implemented in the cdec decoder (Dyer et al., 2010), which learns word segmentation lattices from raw text in an unsupervised manner. Unfortunately, it could not learn meaningful word segmentations for Malay, and thus we do not compare against it. We believe this may be due to its focus on word segmentation, which is of limited use for Malay. 1303 sent. system 1-gram 2-gram 3-gram 4-gram 1k baseline 59.78 29.60 17.36 10.46 paraphrases 62.23 31.19 18.53 11.35 2k baseline 64.20 33.46 20.41 12.92 paraphrases 66.38 35.42 21.97 14.06 5k baseline 68.12 38.12 24.20 15.72 paraphrases 70.41 40.13 25.71 17.02 10k baseline 70.13 40.67 26.15 17.27 paraphrases 72.04 42.28 27.55 18.36 20k baseline 73.19 44.12 29.14 19.50 paraphrases 73.28 44.43 29.77 20.31 40k baseline 74.66 45.97 30.70 20.83 paraphrases 75.47 46.54 31.09 21.17 80k baseline 75.72 48.08 32.80 22.59 paraphrases 76.03 48.47 33.20 23.00 160k baseline 76.55 49.21 34.09 23.78 paraphrases 77.14 49.89 34.57 24.06 320k baseline 77.72 50.54 35.19 24.78 paraphrases 78.03 51.24 35.99 25.42 Table 2: Detailed BLEU n-gram precision scores: in %, for different numbers of training sentence pairs, for baseline and lattice + sent-par + word-par + phrase-par. Our full morphological paraphrasing system is lattice + sent-par + word-par + phrase-par. We also experimented with some of its components turned off. lattice + sent-par + word-par excludes the additional feature from phrase-level paraphrasing. lattice + sent-par has all the morphologically simpler derived forms in the lattice during decoding, but their weights are uniformly set to 0 rather than obtained using pivoting from word alignments. Finally, in order to compare closely to the ‘noisier’ channel model, we further limited the morphological variants of lattice + sent-par in the lattice to lemmata only in lattice + sent-par (orig+lemma). 5 Results and Discussion The experimental results are shown in Table 1. First, we can see that lemmatize all has a consistently disastrous effect on BLEU, which shows that Malay morphology does indeed contain information that is important when translating to English. Second, Dyer (2007)’s ‘noisier’ channel model helps for small datasets only. It performs worse than lattice + sent-par (orig+lemma), from which it differs in the phrase table only; this confirms the importance of our sentence-level paraphrasing. Moving down to lattice + sent-par, we can see that using multiple morphological wordforms instead of just lemmata has a consistently positive impact on BLEU for datasets of all sizes. Sent. 
System BLEU NIST TER METEOR TESLA 1k baseline 23.81 6.7013 64.50 49.26 1.6794 paraphrases 25.27 6.9974 63.03 52.32 1.7579 2k baseline 27.43 7.3790 61.03 54.29 1.8718 paraphrases 29.19 7.7306 59.37 57.32 2.0031 5k baseline 31.53 8.0992 57.12 59.09 2.1172 paraphrases 33.35 8.4127 55.41 61.67 2.2240 10k baseline 33.69 8.5314 55.24 62.26 2.2656 paraphrases 35.23 8.7564 53.60 63.97 2.3634 20k baseline 36.68 8.9604 52.56 64.67 2.3961 paraphrases 37.46 9.0941 52.16 66.42 2.4621 40k baseline 38.49 9.3016 51.20 66.68 2.5166 paraphrases 39.00 9.4184 50.68 67.60 2.5604 80k baseline 40.53 9.6047 49.88 68.77 2.6331 paraphrases 40.95 9.6289 49.09 69.10 2.6628 160k baseline 41.80 9.7479 48.97 69.59 2.6887 paraphrases 42.30 9.8062 48.29 69.62 2.7049 320k baseline 43.02 9.8974 47.44 70.23 2.7398 paraphrases 43.73 9.9945 47.07 70.87 2.7856 Table 3: Results for different evaluation measures: for baseline and lattice + sent-par + word-par + phrase-par (in % for all measures except for NIST). Adding weights obtained using word-level pivoting in lattice + sent-par + word-par helps a bit more, and also using phrase-level paraphrasing weights yields even bigger further improvements for lattice + sent-par + word-par + phrase-par. Overall, our morphological paraphrases yield statistically significant improvements (p < 0.01) in BLEU, according to Collins et al. (2005)’s sign test, for bi-texts as large as 320,000 sentence pairs. A closer look at BLEU. Table 2 shows detailed n-gram BLEU precision scores for n=1,2,3,4. Our system outperforms the baseline on all precision scores and for all numbers of training sentences. Other evaluation measures. Table 3 reports the results for five evaluation measures: BLEU and NIST 11b, TER 0.7.25 (Snover et al., 2006), METEOR 1.0 (Lavie and Denkowski, 2009), and TESLA (Liu et al., 2010). Our system consistently outperforms the baseline for all measures. Example translations. Table 4 shows two translation examples. In the first example, the reduplication bekalan-bekalan (‘supplies’) is an unknown word, and was left untranslated by the baseline system. It was not a problem for our system though, which first paraphrased it as bekalan and then translated it as supply. Even though this is still wrong (we need the plural supplies), it is arguably preferable to passing the word untranslated; it also allowed for a better translation of the surrounding context. 1304 src : Mercy Relief telah menghantar 17 khemah khas bernilai $5,000 setiap satu yang boleh menampung kelas seramai 30 pelajar, selain bekalan-bekalan lain seperti 500 khemah biasa, barang makanan dan ubat-ubatan untuk mangsa gempa Sichuan. ref1: Mercy Relief has sent 17 special tents valued at $5,000 each, that can accommodate a class of 30 students, including other aid supplies such as 500 normal tents, food and medicine for the victims of Sichuan quake. base: mercy relief has sent 17 special tents worth $5,000 each class could accommodate a total of 30 students, besides other bekalan-bekalan 500 tents as usual, foodstuff and medicines for sichuan quake relief. para: mercy relief has sent 17 special tents worth $5,000 each class could accommodate a total of 30 students, besides other supply such as 500 tents, food and medicines for sichuan quake relief. src : Walaupun hidup susah, kami tetap berusaha untuk menjalani kehidupan seperti biasa. ref1: Even though life is difficult, we are still trying to go through life as usual. base: despite the hard life, we will always strive to undergo training as usual. 
para: despite the hard life, we will always strive to live normal. Table 4: Example translations. For each example, we show a source sentence (src), one of the three reference translations (ref1), and the outputs of baseline (base) and of lattice + sent-par + word-par + phrase-par (para). In the second example, the baseline system translated menjalani kehidupan (lit. ‘go through life’) as undergo training, because of a bad phrase pair, which was extracted from wrong word alignments. Note that the words menjalani (‘go through’) and kehidupan (‘life/existence’) are derivational forms of jalan (‘go’) and hidup (‘life/living’), respectively. Thus, in the paraphrasing system, they were involved in sentence-level paraphrasing, where the alignments were improved. While the wrong phrase pair was still available, the system chose a better one from the paraphrased training bi-text. 6 Related Work Most research in SMT for a morphologically rich source language has focused on inflected forms of the same word. The assumption is that they would have similar semantics and thus could have the same translation. Researchers have used stemming (Yang and Kirchhoff, 2006), lemmatization (Al-Onaizan et al., 1999; Goldwater and McClosky, 2005; Dyer, 2007), or direct clustering (Talbot and Osborne, 2006) to identify such groups of words and use them as equivalence classes or as possible alternatives in translation. Frameworks for the simultaneous use of different word-level representations have been proposed as well (Koehn and Hoang, 2007). A second important line of research has focused on word segmentation, which is useful for languages like German, which are rich in compound words that are spelled concatenated (Koehn and Knight, 2003; Yang and Kirchhoff, 2006), or like Arabic, Turkish, Finnish, and, to a lesser extent, Spanish and Italian, where clitics often attach to the preceding word (Habash and Sadat, 2006). For languages with more or less regular inflectional morphology like Arabic or Turkish, another good idea is to segment words into morpheme sequences, e.g., prefix(es)stem-suffix(es), which can be used instead of the original words (Lee, 2004) or in addition to them. This can be achieved using a lattice input to the translation system (Dyer et al., 2008; Dyer, 2009). Unfortunately, none of these general lines of research suits Malay well, whose compounds are rarely concatenated, clitics are not so frequent, and morphology is mostly derivational, and thus likely to generate words whose semantics substantially differs from the semantics of the original word. Therefore, we cannot expect the existence of equivalence classes: it is only occasionally that two derivationally related wordforms would share the same target language translation. Thus, instead of looking for equivalence classes, we have focused on the pairwise relationship between derivationally related wordforms, which we treat as potential paraphrases. Our approach is an extension of the ‘noisier’ channel model of Dyer (2007). He starts by generating separate word alignments for the original training bi-text and for a version of it where the source side has been lemmatized. Then, the two bi-texts and their word alignments are concatenated and used to build a phrase table. Finally, the source sides of the development and the test datasets are converted into confusion networks where additional arcs are added for word lemmata. The arc weights are set to 1 for the original wordforms and to 0 for the lemmata. 
In contrast, we provide multiple paraphrasing alternatives for each morphologically complex word, including derivational forms that occupy intermediary positions between the original wordform 1305 and its lemma. Note that some of those paraphrasing alternatives are multi-word, and thus we use a lattice instead of a confusion network. Moreover, we give different weights to the different alternatives rather then assigning them all 0. Second, our work is related to that of Dyer et al. (2008), who use a lattice to add a single alternative clitic-segmented version of the original word for Arabic. However, we provide multiple alternatives. We also include derivational forms in addition to clitic-segmented ones, and we give different weights to the different alternatives (instead of 0). Third, our work is also related to that of Dyer (2009), who uses a lattice to add multiple alternative segmented versions of the original word for German, Hungarian, and Turkish. However, we focus on derivational morphology rather than on clitics and inflections, add derivational forms in addition to clitic-segmented ones, and use cross-lingual word pivoting to estimate paraphrase probabilities. Finally, our work is related to that of CallisonBurch et al. (2006), who use cross-lingual pivoting to generate phrase-level paraphrases with corresponding probabilities. However, our paraphrases are derived through morphological analysis; thus, we do not need corpora in additional languages. 7 Conclusion and Future Work We have presented a novel approach to translating from a morphologically complex language, which uses paraphrases and paraphrasing techniques at three different levels of translation: wordlevel, phrase-level, and sentence-level. Our experiments translating from Malay, whose morphology is mostly derivational, into English have shown significant improvements over rivaling approaches based on several automatic evaluation measures. In future work, we want to improve the probability estimations for our paraphrasing models. We also want to experiment with other morphologically complex languages and other SMT models. Acknowledgments This work was supported by research grant POD0713875. We would like to thank the anonymous reviewers for their detailed and constructive comments, which have helped us improve the paper. References Mirna Adriani, Jelita Asian, Bobby Nazief, S. M.M. Tahaghoghi, and Hugh E. Williams. 2007. Stemming Indonesian: A confix-stripping approach. ACM Transactions on Asian Language Information Processing, 6:1–33. Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, Franz-Josef Och, David Purdy, Noah A. Smith, and David Yarowsky. 1999. Statistical machine translation. Technical report, JHU Summer Workshop. Timothy Baldwin and Su’ad Awab. 2006. Open source corpus analysis tools for Malay. In Proceedings of the 5th International Conference on Language Resources and Evaluation, LREC ’06, pages 2212–2215. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263–311. Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved statistical machine translation using paraphrases. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL ’06, pages 17–24. David Chiang. 2005. 
A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, ACL ’05, pages 263–270. Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, ACL ’05, pages 531–540. Christopher Dyer, Smaranda Muresan, and Philip Resnik. 2008. Generalizing word lattice translation. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, ACL ’08, pages 1012– 1020. Chris Dyer, Adam Lopez, Juri Ganitkevitch, Jonathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finitestate and context-free translation models. In Proceedings of the ACL 2010 System Demonstrations, ACL ’10, pages 7–12. Christopher Dyer. 2007. The ’noisier channel’: translation from morphologically complex languages. In Proceedings of the Second Workshop on Statistical Machine Translation, WMT ’07, pages 207–211. Chris Dyer. 2009. Using a maximum entropy model to build segmentation lattices for MT. In Proceedings of Human Language Technologies: The 2009 Annual 1306 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL ’09, pages 406–414. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL ’04, pages 273–280. Sharon Goldwater and David McClosky. 2005. Improving statistical MT through morphological analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT-EMNLP ’05, pages 676–683. Nizar Habash and Fatiha Sadat. 2006. Arabic preprocessing schemes for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, HLT-NAACL ’06, pages 49–52. Philipp Koehn and Hieu Hoang. 2007. Factored translation models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’07, pages 868–876. Philipp Koehn and Kevin Knight. 2003. Empirical methods for compound splitting. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics, EACL ’03, pages 187–193. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, NAACL ’03, pages 48–54. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume on Demo and Poster Sessions, ACL ’07, pages 177–180. Alon Lavie and Michael J. Denkowski. 2009. The meteor metric for automatic evaluation of machine translation. 
Machine Translation, 23:105–115. Young-Suk Lee. 2004. Morphological analysis for statistical machine translation. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, HLT-NAACL ’04, pages 57–60. Chang Liu, Daniel Dahlmeier, and Hwee Tou Ng. 2010. TESLA: Translation evaluation of sentences with linear-programming-based analysis. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT ’10, pages 354– 359. Preslav Nakov and Hwee Tou Ng. 2009. Improved statistical machine translation for resource-poor languages using related resource-rich languages. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, EMNLP ’09, pages 1358– 1367. Preslav Nakov. 2008. Improved statistical machine translation using monolingual paraphrases. In Proceedings of the 18th European Conference on Artificial Intelligence, ECAI ’08, pages 338–342. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, ACL ’03, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL ’02, pages 311–318. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, ACL ’05, pages 271–279. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the Association for Machine Translation in the Americas, AMTA ’06, pages 223–231. David Talbot and Miles Osborne. 2006. Modelling lexical redundancy for machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, COLINGACL ’06, pages 969–976. Hua Wu and Haifeng Wang. 2007. Pivot language approach for phrase-based statistical machine translation. Machine Translation, 21(3):165–181. Mei Yang and Katrin Kirchhoff. 2006. Phrase-based backoff models for machine translation of highly inflected languages. In Proceedings of the European Chapter of the Association for Computational Linguistics, EACL ’06, pages 41–48. 1307

Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1308–1317, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Gappy Phrasal Alignment by Agreement Mohit Bansal∗ UC Berkeley, CS Division [email protected] Chris Quirk Microsoft Research [email protected] Robert C. Moore Google Research [email protected] Abstract We propose a principled and efficient phraseto-phrase alignment model, useful in machine translation as well as other related natural language processing problems. In a hidden semiMarkov model, word-to-phrase and phraseto-word translations are modeled directly by the system. Agreement between two directional models encourages the selection of parsimonious phrasal alignments, avoiding the overfitting commonly encountered in unsupervised training with multi-word units. Expanding the state space to include “gappy phrases” (such as French ne ⋆pas) makes the alignment space more symmetric; thus, it allows agreement between discontinuous alignments. The resulting system shows substantial improvements in both alignment quality and translation quality over word-based Hidden Markov Models, while maintaining asymptotically equivalent runtime. 1 Introduction Word alignment is an important part of statistical machine translation (MT) pipelines. Phrase tables containing pairs of source and target language phrases are extracted from word alignments, forming the core of phrase-based statistical machine translation systems (Koehn et al., 2003). Most syntactic machine translation systems extract synchronous context-free grammars (SCFGs) from aligned syntactic fragments (Galley et al., 2004; Zollmann et al., 2006), which in turn are derived from bilingual word alignments and syntactic ∗Author was a summer intern at Microsoft Research during this project. French English voudrais voyager par chemin de fer would like traveling by railroad ne pas not Figure 1: French-English pair with complex word alignment. parses. Alignment is also used in various other NLP problems such as entailment, paraphrasing, question answering, summarization and spelling correction. A limitation to word-based alignment is undesirable. As seen in the French-English example in Figure 1, many sentence pairs are naturally aligned with multi-word units in both languages (chemin de fer; would ⋆like, where ⋆indicates a gap). Much work has addressed this problem: generative models for direct phrasal alignment (Marcu and Wong, 2002), heuristic word-alignment combinations (Koehn et al., 2003; Och and Ney, 2003), models with pseudoword collocations (Lambert and Banchs, 2006; Ma et al., 2007; Duan et al., 2010), synchronous grammar based approaches (Wu, 1997), etc. Most have a large state-space, using constraints and approximations for efficient inference. We present a new phrasal alignment model based on the hidden Markov framework (Vogel et al., 1996). Our approach is semi-Markov: each state can generate multiple observations, representing wordto-phrase alignments. We also augment the state space to include contiguous sequences. This corresponds to phrase-to-word and phrase-to-phrase alignments. We generalize alignment by agreement (Liang et al., 2006) to this space, and find that agreement discourages EM from overfitting. Finally, we make the alignment space more symmetric by including gappy (or non-contiguous) phrases. This allows agreement to reinforce non-contiguous align1308 f1 f2 f3 e1 e2 e3 f1 f2 f3 e1 e2 e3 Observations→ ? ? 
States→ HMM(E|F) HMM(F|E) Figure 2: The model of E given F can represent the phrasal alignment {e1, e2} ∼{f1}. However, the model of F given E cannot: the probability mass is distributed between {e1} ∼ {f1} and {e2} ∼{f1}. Agreement of the forward and backward HMM alignments tends to place less mass on phrasal links and greater mass on word-to-word links. ments, such English not to French ne ⋆pas. Pruning the set of allowed phrases preserves the time complexity of the word-to-word HMM alignment model. 1.1 Related Work Our first major influence is that of conditional phrase-based models. An early approach by Deng and Byrne (2005) changed the parameterization of the traditional word-based HMM model, modeling subsequent words from the same state using a bigram model. However, this model changes only the parameterization and not the set of possible alignments. More closely related are the approaches of Daum´e III and Marcu (2004) and DeNero et al. (2006), which allow phrase-to-phrase alignments between the source and target domain. As DeNero warns, though, an unconstrained model may overfit using unusual segmentations. Interestingly, the phrase-based hidden semi-Markov model of Andr´es-Ferrer and Juan (2009) does not seem to encounter these problems. We suspect two main causes: first, the model interpolates with Model 1 (Brown et al., 1994), which may help prevent overfitting, and second, the model is monotonic, which screens out many possible alignments. Monotonicity is generally undesirable, though: almost all parallel sentences exhibit some reordering phenomena, even when languages are syntactically very similar. The second major inspiration is alignment by agreement by Liang et al. (2006). Here, soft intersection between the forward (F→E) and backward (E→F) alignments during parameter estimation produces better word-to-word correspondences. This unsupervised approach produced alignments with incredibly low error rates on French-English, though only moderate gains in end-to-end machine translation results. Likely this is because the symmetric portion of the HMM space contains only single word to single word links. As shown in Figure 2, in order to retain the phrasal link f1 ∼e1, e2 after agreement, we need the reverse phrasal link e1, e2 ∽f1 in the backward direction. However, this is not possible in a word-based HMM where each observation must be generated by a single state. Agreement tends to encourage 1-to-1 alignments with very high precision and but lower recall. As each word alignment acts as a constraint on phrase extraction, the phrase-pairs obtained from those alignments have high recall and low precision. 2 Gappy Phrasal Alignment Our goal is to unify phrasal alignment and alignment by agreement. We use a phrasal hidden semiMarkov alignment model, but without the monotonicity requirement of Andr´es-Ferrer and Juan (2009). Since phrases may be used in both the state and observation space of both sentences, agreement during EM training no longer penalizes phrasal links such as those in Figure 2. Moreover, the benefits of agreement are preserved: meaningful phrasal links that are likely in both directions of alignment will be reinforced, while phrasal links likely in only one direction will be discouraged. This avoids segmentation problems encountered by DeNero et al. (2006). Non-contiguous sequences of words present an additional challenge. Even a semi-Markov model with phrases can represent the alignment between English not and French ne ⋆pas in one direction only. 
To make the model more symmetric, we extend the state space to include gappy phrases as well.1 The set of alignments in each model becomes symmetric, though the two directions model gappy phrases differently. Consider not and ne ⋆pas: when predicting French given English, the alignment corresponds to generating multiple distinct ob1We only allow a single gap with one word on each end. This is sufficient for the vast majority of the gapped phenomena that we have seen in our training data. 1309 voudrais voyager par chemin de fer would like traveling by railroad C would like traveling by railroad voudrais voyager par chemin de fer not pas ne not ne pas Observations→ States→ Observations→ States→ Figure 3: Example English-given-French and French-given-English alignments of the same sentence pair using the Hidden SemiMarkov Model (HSMM) for gapped-phrase-to-phrase alignment. It allows the state side phrases (denoted by vertical blocks), observation side phrases (denoted by horizontal blocks), and state-side gaps (denoted by discontinuous blocks in the same column connected by a hollow vertical “bridge”). Note both directions can capture the desired alignment for this sentence pair. servations from the same state; in the other direction, the word not is generated by a single gappy phrase ne ⋆pas. Computing posteriors for agreement is somewhat complicated, so we resort to an approximation described later. Exact inference retains a low-order polynomial runtime; we use pruning to increase speed. 2.1 Hidden Markov Alignment Models Our model can be seen as an extension of the standard word-based Hidden Markov Model (HMM) used in alignment (Vogel et al., 1996). To ground the discussion, we first review the structure of that model. This generative model has the form p(O|S) = P A p(A, O|S), where S = (s1, . . . , sI) ∈Σ⋆is a sequence of words from a vocabulary Σ; O = (o1, . . . , oJ) ∈Π⋆is a sequence from vocabulary Π; and A = (a1, . . . , aJ) is the alignment between the two sequences. Since some words are systematically inserted during translation, the target (state) word sequence is augmented with a special NULL word. To retain the position of the last aligned word, the state space contains I copies of the NULL word, one for each position (Och and Ney, 2003). The alignment uses positive positions for words and negative positions for NULL states, so aj ∈{1..I} ∪{−1.. −I}, and si = NULL if i < 0. It uses the following generative procedure. First the length of the observation sequence is selected based on pl(J|I). Then for each observation position, the state is selected based on the prior state: a null state with probability p0, or a non-null state at position aj with probability (1 −p0) · pj(aj|aj−1) where pj is a jump distribution. Finally the observation word oj at that position is generated with probability pt(oj|saj), where pt is an emission distribution: p(A, O|S) = pl(J|I) J Y j=1 pj(aj|aj−1)pt(oj|saj) pj(a|a′) = ( (1 −p0) · pd(a −|a′|) a > 0 p0 · δ(|a|, |a′|) a < 0 We pick p0 using grid search on the development set, pl is uniform, and the pj and pt are optimized by EM.2 2.2 Gappy Semi-Markov Models The HMM alignment model identifies a wordto-word correspondence between the observation 2Note that jump distances beyond -10 or 10 share a single parameter to prevent sparsity. 1310 words and the state words. We make two changes to expand this model. 
First, we allow contiguous phrases on the observation side, which makes the model semi-Markov: at each time stamp, the model may emit more than one observation word. Next, we also allow contiguous and gappy phrases on the state side, leading to an alignment model that can retain phrasal links after agreement (see Section 4). The S and O random variables are unchanged. Since a single state may generate multiple observation words, we add a new variable K representing the number of states. K should be less than J, the number of observations. The alignment variable is augmented to allow contiguous and non-contiguous ranges of words. We allow only a single gap, but of unlimited length. The null state is still present, and is again represented by negative numbers. A =(a1, . . . , aK) ∈A(I) A(I) ={(i1, i2, g)|0 < i1 ≤i2 ≤I, g ∈{GAP, CONTIG}}∪ {(−i, −i, CONTIG) | 0 < i ≤I} We add one more random variable to capture the total number of observations generated by each state. L ∈{(l0, l1, . . . , lK) | 0 = l0 < · · · < lK = J} The generative model takes the following form: p(A, L, O|S) =pl(J|I)pf(K|J) K Y k=1 pj(ak|ak−1)· pt(lk, olk lk−1+1|S[ak], lk−1) First, the length of the observation sequence (J) is selected, based on the number of words in the state-side sentence (I). Since it does not affect the alignment, pl is modeled as a uniform distribution. Next, we pick the total number of states to use (K), which must be less than the number of observations (J). Short state sequences receive an exponential penalty: pf(K|J) ∝η(J−K) if 0 ≤K ≤J, or 0 otherwise. A harsh penalty (small positive value of η) may prevent the systematic overuse of phrases.3 3We found that this penalty was crucial to prevent overfitting in independent training. Joint training with agreement made it basically unnecessary. Next we decide the assignment of each state. We retain the first-order Markov assumption: the selection of each state is conditioned only on the prior state. The transition distribution is identical to the word-based HMM for single word states. For phrasal and gappy states, we jump into the first word of that state, and out of the last word of that state, and then pay a cost according to how many words are covered within that state. If a = (i1, i2, g), then the beginning word of a is F(a) = i1, the ending word is L(a) = i2, and the length N(a) is 2 for gapped states, 0 for null states, and last(a) − first(a) + 1 for all others. The transition probability is: pj(a|a′) =      p0 · δ(|F(a)|, |L(a′)|) if F(a) < 0 (1 −p0)pd(F(a) −|L(a′)|)· pn(N(a)) otherwise where pn(c) ∝κc is an exponential distribution. As in the word HMM case, we use a mixture parameter p0 to determine the likelihood of landing in a NULL state. The position of that NULL state remembers the last position of the prior state. For non-null words, we pick the first word of the state according to the distance from the last word of the prior state. Finally, we pick a length for that final state according to an exponential distribution: values of κ less than one will penalize the use of phrasal states. For each set of state words, we maintain an emission distribution over observation word sequences. Let S[a] be the set of state words referred to by the alignment variable a. 
For example, the English given French alignment of Figure 3 includes the following state word sets: S[(2, 2, CONTIG)] = voudrais S[(1, 3, GAP)] = ne ⋆pas S[(6, 8, CONTIG)] = chemin de fer For the emission distribution we keep a multinomial over observation phrases for each set of state words: p(l, ol l′|S[a], l′) ∝c(ol l′|S[a]) In contrast to the approach of Deng and Byrne (2005), this encourages greater consistency across instances, and more closely resembles the commonly used phrasal translation models. 1311 We note in passing that pf(K|J) may be moved inside the product: pf(K|J) ∝ η(J−K) = QK k=1 η(lk−lk−1−1). The following form derived using the above rearrangement is helpful during EM. p(A, L, O|S) ∝ K Y k=1 pj(ak|ak−1)· pt(lk, olk lk−1+1|S[ak], lk−1)· η(lk−lk−1−1) where lk −lk−1 −1 is the length of the observation phrase emitted by state S[ak]. 2.3 Minimality At alignment time we focus on finding the minimal phrase pairs, under the assumption that composed phrase pairs can be extracted in terms of these minimal pairs. We are rather strict about this, allowing only 1 →k and k →1 phrasal alignment edges (or links). This should not cause undue stress, since edges of the form 2 −3 (say e1e2 ∼f1f2f3) can generally be decomposed into 1 −1 ∪1 −2 (i.e., e1 ∼f1 ∪e2 ∼f2f3), etc. However, the model does not require this to be true: we will describe reestimation for unconstrained general models, but use the limited form for word alignment. 3 Parameter Estimation We use Expectation-Maximization (EM) to estimate parameters. The forward-backward algorithm efficiently computes posteriors of transitions and emissions in the word-based HMM. In a standard HMM, emission always advances the observation position by one, and the next transition is unaffected by the emission. Neither of these assumptions hold in our model: multiple observations may be emitted at a time, and a state may cover multiple stateside words, which affects the outgoing transition. A modified dynamic program computes posteriors for this generalized model. The following formulation of the forwardbackward algorithm for word-to-word alignment is a good starting point. α[x, 0, y] indicates the total mass of paths that have just transitioned into state y at observation x but have not yet emitted; α[x, 1, y] represents the mass after emission but before subsequent transition. β is defined similarly. (We omit NULL states for brevity; the extension is straightforward.) α[0, 0, y] = pj(y|INIT) α[x, 1, y] = α[x, 0, y] · pt(ox|sy) α[x, 0, y] = X y′ α[x −1, 1, y′] · pj(y|y′) β[n, 1, y] = 1 β[x, 0, y] = pt(ox|sy) · β[x, 1, y] β[x, 1, y] = X y′ pj(y′|y) · β[x + 1, 0, y′] Not only is it easy to compute posteriors of both emissions (α[x, 0, y]pt(ox|sy)β[x, 1, y]) and transitions (α[x, 1, y]pj(y′|y)β[x + 1, 0, y′]) with this formulation, it also simplifies the generalization to complex emissions. We update the emission forward probabilities to include a search over the possible starting points in the state and observation space: α[0, 0, y] =pj(y|INIT) α[x, 1, y] = X x′<x,y′≤y α[x′, 0, y′] · EMIT(x′ : x, y′ : y) α[x, 0, y] = X y′ α[x −1, 1, y′] · pj(y|y′) β[n, 1, y] =1 β[x′, 0, y′] = X x′<x,y′≤y EMIT(x′ : x, y′ : y) · β[x, 1, y] β[x, 1, y] = X y′ pj(y′|y) · β[x + 1, 0, y′] Phrasal and gapped emissions are pooled into EMIT: EMIT(w : x, y : z) =pt(ox w|sz y) · ηz−y+1 · κx−w+1+ pt(ox w|sy ⋆sz) · η2 · κx−w+1 The transition posterior is the same as above. 
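To illustrate the structure of this dynamic program, here is a minimal forward pass. It is a sketch under simplifying assumptions: NULL states and pruning are omitted, and the pooled EMIT score and the jump probabilities are passed in as assumed callables.

```python
import numpy as np

def forward(n_obs, n_states, emit, jump, init):
    """Forward pass of the generalized (phrasal, semi-Markov) model.

    pre[x, y]  : mass of paths that have emitted observations 1..x and have
                 just transitioned into a state span starting at y
    post[x, z] : mass of paths that have emitted observations 1..x, where the
                 last emission came from a state span ending at z
    emit(w, x, y, z) : pooled EMIT score for emitting observations w..x from
                       the state span y..z (contiguous or gapped), as above
    jump(y, z_prev)  : probability of starting the next span at y after a span
                       ending at z_prev; init(y) for the first span
    """
    pre = np.zeros((n_obs + 1, n_states + 1))
    post = np.zeros((n_obs + 1, n_states + 1))
    for y in range(1, n_states + 1):
        pre[0, y] = init(y)
    for x in range(1, n_obs + 1):
        for z in range(1, n_states + 1):                  # span end
            post[x, z] = sum(pre[w, y] * emit(w + 1, x, y, z)
                             for w in range(x)            # last span emits w+1..x
                             for y in range(1, z + 1))    # span start
        if x < n_obs:
            for y in range(1, n_states + 1):              # next span start
                pre[x, y] = sum(post[x, z] * jump(y, z)
                                for z in range(1, n_states + 1))
    return pre, post   # sentence likelihood = post[n_obs, :].sum()
```

The backward pass mirrors this structure, and posteriors are read off as products of forward and backward quantities, as in the formulas above.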
The emission is very similar: the posterior probability that ox w is aligned to sz y is proportional to α[w, 0, y] · pt(ox w|sz y)·ηz−y+1·κx−w+1·β[x, 1, z]. For a gapped phrase, the posterior is proportional to α[w, 0, y] · pt(ox w|sy ⋆sz) · η2 · κx−w+1 · β[x, 1, z]. Given an inference procedure for computing posteriors, unsupervised training with EM follows immediately. We use a simple maximum-likelihood update of the parameters using expected counts based on the posterior distribution. 1312 4 Alignment by Agreement Following Liang et al. (2006), we quantify agreement between two models as the probability that the alignments produced by the two models agree on the alignment z of a sentence pair x = (S, O): X z p1(z|x; θ1)p2(z|x; θ2) To couple the two models, the (log) probability of agreement is added to the standard log-likelihood objective: max θ1,θ2 X x h log p1(x; θ1) + log p2(x; θ2)+ log X z p1(z|x; θ1)p2(z|x; θ2) i We use the heuristic estimator from Liang et al. (2006), letting q be a product of marginals: E : q(z; x) := Y z∈z p1(z|x; θ1)p2(z|x; θ2) where each pk(z|x; θk) is the posterior marginal of some edge z according to each model. Such a heuristic E step computes the marginals for each model separately, then multiplies the marginals corresponding to the same edge. This product of marginals acts as the approximation to the posterior used in the M step for each model. The intuition is that if the two models disagree on a certain edge z, then the marginal product is small, hence that edge is dis-preferred in each model. Contiguous phrase agreement. It is simple to extend agreement to alignments in the absence of gaps. Multi-word (phrasal) links are assigned some posterior probability in both models, as shown in the example in Figure 3, and we multiply the posteriors of these phrasal links just as in the single word case.4 γF→E(fi, ej) := γE→F (ej, fi) := [γF→E(fi, ej) × γE→F (ej, fi)] 4Phrasal correspondences can be represented in multiple ways: multiple adjacent words could be generated from the same state either using one semi-Markov emission, or using multiple single word emissions followed by self-jumps. Only the first case is reinforced through agreement, so the latter is implicitly discouraged. We explored an option to forbid samestate transitions, but found it made little difference in practice. Gappy phrase agreement. When we introduce gappy phrasal states, agreement becomes more challenging. In the forward direction F→E, if we have a gappy state aligned to an observation, say fi ⋆fj ∼ ek, then its corresponding edge in the backward direction E→F would be ek ∽fi ⋆ fj. However, this is represented by two distinct and unrelated emissions. Although it is possible the compute the posterior probability of two non-adjacent emissions, this requires running a separate dynamic program for each such combination to sum the mass between these emissions. For the sake of efficiency we resort to an approximate computation of posterior marginals using the two word-to-word edges ek ∽fi and ek ∽fj. The forward posterior γF→E for edge fi ⋆fj ∼ ek is multiplied with the min of the backward posteriors of the edges ek ∽fi and ek ∽fj. γF→E(fi ⋆fj, ek) := γF→E(fi ⋆fj, ek)× min n γE→F (ek, fi), γE→F (ek, fj) o Note that this min is an upper bound on the desired posterior of edge ek ∽fi ⋆fj, since every path that passes through ek ∽fi and ek ∽fj must pass through ek ∽fi, therefore the posterior of ek ∽ fi ⋆fj is less than that of ek ∽fi, and likewise less than that of ek ∽fj. 
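The agreement computation itself reduces to multiplying matching posteriors, with the min approximation above for gapped state phrases. A minimal sketch follows, assuming posteriors are stored as dictionaries keyed by position tuples; the key conventions are assumptions of the sketch.

```python
def agree_posteriors(post_f2e, post_e2f):
    """Heuristic E step for agreement: multiply the posterior marginal of each
    link under the F->E model with the marginal of the corresponding link
    under the E->F model; for a gapped state phrase f_i * f_j aligned to a
    single observation e_k, the reverse marginal is approximated by the min
    of the two word-to-word marginals, as described above.

    post_f2e : dict mapping ((f positions), (e positions)) -> posterior
    post_e2f : dict mapping ((e positions), (f positions)) -> posterior
    Returns the re-weighted F->E posteriors; the E->F side is updated
    separately (see below).
    """
    agreed = {}
    for (f_span, e_span), p in post_f2e.items():
        gapped = (len(f_span) == 2 and len(e_span) == 1
                  and f_span[1] > f_span[0] + 1)   # a gap skips at least one word
        if gapped:
            fi, fj = f_span
            (ek,) = e_span
            reverse = min(post_e2f.get(((ek,), (fi,)), 0.0),
                          post_e2f.get(((ek,), (fj,)), 0.0))
        else:
            # word or contiguous phrase link: look up the same link reversed
            reverse = post_e2f.get((e_span, f_span), 0.0)
        agreed[(f_span, e_span)] = p * reverse
    return agreed
```

The corresponding update for the E→F posteriors, which mixes in the gapped forward posteriors, is described next.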
The backward posteriors of the edges ek ∽fi and ek ∽fj are also mixed with the forward posteriors of the edges to which they correspond. γE→F (ek, fi) := γE→F (ek, fi) × " γF→E(fi, ek)+ X h<i<j n γF→E(fh ⋆fi, ek) + γF→E(fi ⋆fj, ek) o# 5 Pruned Lists of ‘Allowed’ Phrases To identify contiguous and gapped phrases that are more likely to lead to good alignments, we use wordto-word HMM alignments from the full training data in both directions (F→E and E→F). We collect observation phrases of length 2 to K aligned to a single state, i.e. oj i ∼s, to add to a list of allowed phrases. For gappy phrases, we find all non-consecutive observation pairs oi and oj such that: (a) both are 1313 aligned to the same state sk, (b) state sk is aligned to only these two observations, and (c) at least one observation between oi and oj is aligned to a non-null state other than sk. These observation phrases are collected from F→E and E→F models to build contiguous and gappy phrase lists for both languages. Next, we order the phrases in each contiguous list using the discounted probability: pδ(oj i ∼s|oj i) = max(0, count(oj i ∼s) −δ) count(oj i) where count(oj i ∼s) is the count of occurrence of the observation-phrase oj i, all aligned to some single state s, and count(oj i) is the count of occurrence of the observation phrase oj i, not all necessarily aligned to a single state. Similarly, we rank the gappy phrases using the discounted probability: pδ(oi ⋆oj ∼s|oi ⋆oj) = max(0, count(oi ⋆oj ∼s) −δ) count(oi ⋆oj) where count(oi ⋆oj ∼s) is the count of occurrence of the observations oi and oj aligned to a single state s with the conditions mentioned above, and count(oi ⋆oj) is the count of general occurrence of the observations oi and oj in order. We find that 200 gappy phrases and 1000 contiguous phrases works well, based on tuning with a development set. 6 Complexity Analysis Let m be the length of the state sentence S and n be the length of the observation sentence O. In IBM Model 1 (Brown et al., 1994), with only a translation model, we can infer posteriors or max alignments in O(mn). HMM-based word-to-word alignment model (Vogel et al., 1996) adds a distortion model, increasing the complexity to O(m2n). Introducing phrases (contiguous) on the observation side, we get a HSMM (Hidden Semi-Markov Model). If we allow phrases of length no greater than K, then the number of observation types rises from n to Kn for an overall complexity of O(m2Kn). Introducing state phrases (contiguous) with length ≤K grows the number of state types from m to Km. Complexity further increases to O((Km)2Kn) = O(K3m2n). Finally, when we introduce gappy state phrases of the type si ⋆ sj, the number of such phrases is O(m2), since we may choose a start and end point independently. Thus, the total complexity rises to O((Km + m2)2Kn) = O(Km4n). Although this is less than the O(n6) complexity of exact ITG (Inversion Transduction Grammar) model (Wu, 1997), a quintic algorithm is often quite slow. The pruned lists of allowed phrases limit this complexity. The model is allowed to use observation (contiguous) and state (contiguous and gappy) phrases only from these lists. The number of phrases that match any given sentence pair from these pruned lists is very small (∼2 to 5). If the number of phrases in the lists that match the observation and state side of a given sentence pair are small constants, the complexity remains O(m2n), equal to that of word-based models. 
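The pruned lists themselves are built by the discounted-probability ranking of Section 5. A small sketch is given below; the discount value shown is a hypothetical default rather than a tuned setting from this work.

```python
def rank_allowed_phrases(joint_count, total_count, delta=0.5, top_k=1000):
    """Rank candidate phrases by their discounted probability (Section 5) and
    keep the top_k as the 'allowed' list.

    joint_count[p] : count of phrase p occurring aligned to a single state
                     (for gappy phrases: the gapped-pair conditions above)
    total_count[p] : count of all occurrences of p, aligned that way or not
    delta          : absolute discount (hypothetical default here)
    """
    scored = [(max(0.0, joint - delta) / total_count[p], p)
              for p, joint in joint_count.items()]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for _, p in scored[:top_k]]
```

With the settings reported above, the call would use top_k=1000 for the contiguous list and top_k=200 for the gappy list.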
7 Results We evaluate our models based on both word alignment and end-to-end translation with two language pairs: English-French and English-German. For French-English, we use the Hansards NAACL 2003 shared-task dataset, which contains nearly 1.1 million training sentence pairs. We also evaluated on German-English Europarl data from WMT2010, with nearly 1.6 million training sentence pairs. The model from Liang et al. (2006) is our word-based baseline. 7.1 Training Regimen Our training regimen begins with both the forward (F→E) and backward (E→F) iterations of Model 1 run independently (i.e. without agreement). Next, we train several iterations of the forward and backward word-to-word HMMs, again with independent training. We do not use agreement during word alignment since it tends to produce sparse 1-1 alignments, which in turn leads to low phrase emission probabilities in the gappy model. Initializing the emission probabilities of the semiMarkov model is somewhat complicated, since the word-based models do not assign any mass to the phrasal or gapped configurations. Therefore we use a heuristic method. We first retrieve the Viterbi alignments of the forward and backward 1314 word-to-word HMM aligners. For phrasal correspondences, we combine these forward and backward Viterbi alignments using a common heuristic (Union, Intersection, Refined, or Grow-DiagFinal), and extract tight phrase-pairs (no unaligned words on the boundary) from this alignment set. We found that Grow-Diag-Final was most effective in our experiments. The counts gathered from this phrase extraction are used to initialize phrasal translation probabilities. For gappy states in a forward (F→E) model, we use alignments from the backward (E→F) model. If a state sk is aligned to two non-consecutive observations oi and oj such that sk is not aligned to any other observation, and at least one observation between oi and oj is aligned to a non-null state other than sk, then we reverse this link to get oi ⋆oj ∼sk and use it as a gappedstate-phrase instance for adding fractional counts. Given these approximate fractional counts, we perform a standard MLE M-step to initialize the emission probability distributions. The distortion probabilities from the word-based model are used without changes. 7.2 Alignment Results (F1) The validation and test sentences have been handaligned (see Och and Ney (2003)) and are marked with both sure and possible alignments. For FrenchEnglish, following Liang et al. (2006), we lowercase all words, and use the validation set plus the first 100 test sentences as our development set and the remaining 347 test-sentences as our test-set for final F1 evaluation.5 In German-English, we have a development set of 102 sentences, and a test set of 258 sentences, also annotated with a set of sure and possible alignments. Given a predicted alignment A, precision and recall are computed using sure alignments S and possible alignments P (where S ⊆P) as in Och and Ney (2003): Precision = |A ∩P| |A| × 100% Recall = |A ∩S| |S| × 100% 5We report F1 rather than AER because AER appears not to correlate well with translation quality.(Fraser and Marcu, 2007) Language pair Word-to-word Gappy French-English 34.0 34.5 German-English 19.3 19.8 Table 2: BLEU results on German-English and French-English. 
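These scores, together with AER and F1 defined just below, can be computed directly from the predicted and gold link sets; a small helper is sketched here, assuming A and S are non-empty sets of (i, j) link pairs.

```python
def alignment_scores(A, S, P):
    """Precision, recall, AER and F1 for a predicted alignment A, given sure
    links S and possible links P (S is a subset of P), following Och and
    Ney (2003).  All arguments are sets of (i, j) link pairs.
    """
    a_and_p = len(A & P)
    a_and_s = len(A & S)
    precision = 100.0 * a_and_p / len(A)
    recall = 100.0 * a_and_s / len(S)
    aer = 100.0 * (1.0 - (a_and_s + a_and_p) / (len(A) + len(S)))
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, aer, f1
```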
AER =  1 −|A ∩S| + |A ∩P| |A| + |S|  × 100% F1 = 2 × Precision × Recall Precision + Recall × 100% Many free parameters were tuned to optimize alignment F1 on the development set, including the number of iterations of each Model 1, HMM, and Gappy; the NULL weight p0, the number of contiguous and gappy phrases to include, and the maximum phrase length. Five iterations of all models, p0 = 0.3, using the top 1000 contiguous phrases and the top 200 gappy phrases, maximum phrase length of 5, and penalties η = κ = 1 produced competitive results. Note that by setting η and κ to one, we have effectively removed the penalty altogether without affecting our results. In Table 1 we see a consistent improvement with the addition of contiguous phrases, and some additional gains with gappy phrases. 7.3 Translation Results (BLEU) We assembled a phrase-based system from the alignments (using only contiguous phrases consistent with the potentially gappy alignment), with 4 channel models, word and phrase count features, distortion penalty, lexicalized reordering model, and a 5-gram language model, weighted by MERT. The same free parameters from above were tuned to optimize development set BLEU using grid search. The improvements in Table 2 are encouraging, especially as a syntax-based or non-contiguous phrasal system (Galley and Manning, 2010) may benefit more from gappy phrases. 8 Conclusions and Future Work We have described an algorithm for efficient unsupervised alignment of phrases. Relatively straightforward extensions to the base HMM allow for efficient inference, and agreement between the two 1315 Data Decoding method Word-to-word +Contig phrases +Gappy phrases FE 10K Viterbi 89.7 90.6 90.3 FE 10K Posterior ≥0.1 90.1 90.4 90.7 FE 100K Viterbi 93.0 93.6 93.8 FE 100K Posterior ≥0.1 93.1 93.7 93.8 FE All Viterbi 94.1 94.3 94.3 FE All Posterior ≥0.1 94.2 94.4 94.5 GE 10K Viterbi 76.2 79.6 79.7 GE 10K Posterior ≥0.1 76.7 79.3 79.3 GE 100K Viterbi 81.0 83.0 83.2 GE 100K Posterior ≥0.1 80.7 83.1 83.4 GE All Viterbi 83.0 85.2 85.6 GE All Posterior ≥0.1 83.7 85.3 85.7 Table 1: F1 scores of automatic word alignments, evaluated on the test set of the hand-aligned sentence pairs. models prevents EM from overfitting, even in the absence of harsh penalties. We also allow gappy (noncontiguous) phrases on the state side, which makes agreement more successful but agreement needs approximation of posterior marginals. Using pruned lists of good phrases, we maintain complexity equal to the baseline word-to-word model. There are several steps forward from this point. Limiting the gap length also prevents combinatorial explosion; we hope to explore this in future work. Clearly a translation system that uses discontinuous mappings at runtime (Chiang, 2007; Galley and Manning, 2010) may make better use of discontinuous alignments. This model can also be applied at the morpheme or character level, allowing joint inference of segmentation and alignment. Furthermore the state space could be expanded and enhanced to include more possibilities: states with multiple gaps might be useful for alignment in languages with template morphology, such as Arabic or Hebrew. More exploration in the model space could be useful – a better distortion model might place a stronger distribution on the likely starting and ending points of phrases. Acknowledgments We would like to thank the anonymous reviewers for their helpful suggestions. This project is funded by Microsoft Research. References Jes´us Andr´es-Ferrer and Alfons Juan. 2009. 
A phrasebased hidden semi-Markov approach to machine translation. In Proceedings of EAMT. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1994. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263–311. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics. Hal Daum´e III and Daniel Marcu. 2004. A phrase-based HMM approach to document/abstract alignment. In Proceedings of EMNLP. John DeNero, Dan Gillick, James Zhang, and Dan Klein. 2006. Why generative phrase models underperform surface heuristics. In Proceedings of ACL. Yonggang Deng and William Byrne. 2005. HMM word and phrase alignment for statistical machine translation. In Proceedings of HLT-EMNLP. Xiangyu Duan, Min Zhang, and Haizhou Li. 2010. Pseudo-word for phrase-based machine translation. In Proceedings of ACL. Alexander Fraser and Daniel Marcu. 2007. Measuring word alignment quality for statistical machine translation. Computational Linguistics, 33(3):293–303. Michel Galley and Christopher D. Manning. 2010. Accurate non-hierarchical phrase-based translation. In HLT/NAACL. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proceedings of HLT-NAACL. Philipp Koehn, Franz Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of HLT-NAACL. Patrik Lambert and Rafael Banchs. 2006. Grouping multi-word expressions according to part-of-speech in 1316 statistical machine translation. In Proc. of the EACL Workshop on Multi-Word-Expressions in a Multilingual Context. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of HLT-NAACL. Yanjun Ma, Nicolas Stroppa, and Andy Way. 2007. Boostrapping word alignment via word packing. In Proceedings of ACL. Daniel Marcu and Daniel Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proceedings of EMNLP. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29:19–51. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of COLING. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23:377–404. Andreas Zollmann, Ashish Venugopal, and Stephan Vogel. 2006. Syntax augmented machine translation via chart parsing. In Processings of the Statistical Machine Translation Workshop at NAACL. 1317
2011
131
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1318–1326, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Translationese and Its Dialects Moshe Koppel Noam Ordan Department of Computer Science Department of Computer Science Bar Ilan University University of Haifa Ramat-Gan, Israel 52900 Haifa, Israel 31905 [email protected] [email protected] Abstract While it is has often been observed that the product of translation is somehow different than non-translated text, scholars have emphasized two distinct bases for such differences. Some have noted interference from the source language spilling over into translation in a source-language-specific way, while others have noted general effects of the process of translation that are independent of source language. Using a series of text categorization experiments, we show that both these effects exist and that, moreover, there is a continuum between them. There are many effects of translation that are consistent among texts translated from a given source language, some of which are consistent even among texts translated from families of source languages. Significantly, we find that even for widely unrelated source languages and multiple genres, differences between translated texts and non-translated texts are sufficient for a learned classifier to accurately determine if a given text is translated or original. 1 Introduction The products of translation (written or oral) are generally assumed to be ontologically different from non-translated texts. Researchers have emphasized two aspects of this difference. Some (Baker 1993) have emphasized general effects of the process of translation that are independent of source language and regard the collective product of this process in a given target language as an „interlanguage‟ (Selinker, 1972), „third code‟ (Frawley, 1984) or „translationese‟ (Gellerstam, 1986). Others (Toury, 1995) have emphasized the effects of interference, the process by which a specific source language leaves distinct marks or fingerprints in the target language, so that translations from different source languages into the same target language may be regarded as distinct dialects of translationese. We wish to use text categorization methods to set both of these claims on a firm empirical foundation. We will begin by bringing evidence for two claims: (1) Translations from different source languages into the same target language are sufficiently different from each other for a learned classifier to accurately identify the source language of a given translated text; (2) Translations from a mix of source languages are sufficiently distinct from texts originally written in the target language for a learned classifier to accurately determine if a given text is translated or original. Each of these claims has been made before, but our results will strengthen them in a number of ways. Furthermore, we will show that the degree of difference between translations from two source languages reflects the degree of difference between the source languages themselves. Translations from cognate languages differ from non-translated texts in similar ways, while translations from unrelated languages differ from non-translated texts in distinct ways. The same result holds for families of languages. The outline of the paper is as follows. 
In the following section, we show that translations from different source languages can be distinguished from each other and that closely related source languages manifest similar forms of interference. In section 3, we show that, in a corpus involving five European languages, we can distinguish translationese from non-translated text and we consider some salient markers of translationese. In section 1318 4, we consider the extent to which markers of translationese cross over into non-European languages as well as into different genres. Finally, we consider possible applications and implications for future studies. 2 Interference Effects in Translationese In this section, we perform several text categorization experiments designed to show the extent to which interference affects (both positively and negatively) our ability to classify documents. 2.1 The Europarl Corpus The main corpus we will use throughout this paper is Europarl (Koehn, 2005), which consists of transcripts of addresses given in the European Parliament. The full corpus consists of texts translated into English from 11 different languages (and vice versa), as well as texts originally produced in English. For our purposes, it will be sufficient to use translations from five languages (Finnish, French, German, Italian and Spanish), as well as original English. We note that this corpus constitutes a comparable corpus (Laviosa, 1997), since it contains (1) texts written originally in a certain language (English), as well as (2) texts translated into that same language, matched for genre, domain, publication timeframe, etc. Each of the five translated components is a text file containing just under 500,000 words; the original English component is a file of the same size as the aggregate of the other five. The five source languages we use were selected by first eliminating several source languages for which the available text was limited and then choosing from among the remaining languages, those of varying degrees of pairwise similarity. Thus, we select three cognate (Romance) languages (French, Italian and Spanish), a fourth less related language (German), and a fifth even further removed (Finnish). As will become clear, the motivation is to see whether the distance between the languages impacts the distinctiveness of the translation product. We divide each of the translated corpora into 250 equal chunks, paying no attention to natural units within the corpus. Similarly, we divide the original English corpus into 1250 equal chunks. We set aside 50 chunks from each of the translated corpora and 250 chunks from the original English corpus for development purposes (as will be explained below). The experiments described below use the remaining 1000 translated chunks and 1000 original English chunks. 2.2 Identifying source language Our objective in this section is to measure the extent to which translations are affected by source language. Our first experiment will be to use text categorization methods to learn a classifier that categorizes translations according to source language. We will check the accuracy of such classifiers on out-of-sample texts. High accuracy would reflect that there are exploitable differences among translations of otherwise comparable texts that differ only in terms of source language. The details of the experiment are as follows. We use the 200 chunks from each translated corpus, as described above. 
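A minimal sketch of the chunking and held-out split described in Section 2.1; the file paths are hypothetical and each corpus is treated as plain whitespace-tokenized text.

```python
def split_into_chunks(tokens, n_chunks):
    """Divide a token list into n_chunks pieces of equal size,
    paying no attention to natural units such as sentences or speeches."""
    size = len(tokens) // n_chunks
    return [tokens[i * size:(i + 1) * size] for i in range(n_chunks)]

def load_chunks(path, n_chunks, n_dev):
    """Return (development chunks, experiment chunks) for one corpus file."""
    with open(path, encoding="utf-8") as f:
        tokens = f.read().split()
    chunks = split_into_chunks(tokens, n_chunks)
    return chunks[:n_dev], chunks[n_dev:]

# Hypothetical file layout: one file per translated corpus, one for original English.
translated = {}
for lang in ["fi", "fr", "de", "it", "es"]:
    dev, main = load_chunks(f"europarl.{lang}-en.txt", n_chunks=250, n_dev=50)
    translated[lang] = {"dev": dev, "main": main}          # 200 experiment chunks per language
dev_orig, main_orig = load_chunks("europarl.original.en.txt", n_chunks=1250, n_dev=250)
```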
We use as our feature set a list of 300 function words taken from LIWC (Pennebaker, 2001) and represent each chunk as a vector of size 300 in which each entry represents the frequency of the corresponding feature in the chunk. The restriction to function words is crucial; we wish to rely only on stylistic differences rather than content differences that might be artifacts of the corpus. We use Bayesian logistic regression (Madigan, 2005) as our learning method in order to learn a classifier that classifies a given text into one of five classes representing the different source languages. We use 10-fold cross-validation as our testing method. We find that 92.7% of documents are correctly classified. In Table 1 we show the confusion matrix for the five languages. As can be seen, there are more mistakes across the three cognate languages than between those three languages and German and still fewer mistakes involving the more distant Finnish language. It Fr Es De Fi It 169 19 8 4 0 Fr 18 161 12 8 1 Es 3 11 172 11 3 De 4 12 3 178 3 Fi 0 1 2 5 192 Table 1: Confusion matrix for 10-fold cross validation experiment to determine source language of texts translated into English 1319 This result strengthens that of van Halteren (2008) in a similar experiment. Van Halteren, also using Europarl (but with Dutch as the fifth source language, rather than Finnish), obtained accuracy of 87.2%-96.7% for a two-way decision on source language, and 81.5%-87.4% for a six-way decision (including the original which has no source language). Significantly, though, van Halteren‟s feature set included content words and he notes that many of the most salient differences reflected differences in thematic emphasis. By restricting our feature set to function words, we neutralize such effects. In Table 2, we show the two words most overrepresented and the two words most underrepresented in translations from each source language (ranked according to an unpaired T-test). For each of these, the difference between frequency of use in the indicated language and frequency of use in the other languages in aggregate is significant at p<0.01. over-represented under-represented Fr of, finally here, also It upon, moreover also, here Es with, therefore too, then De here, then of, moreover Fi be, example me, which Table 2: Most salient markers of translations from each source language. The two most underrepresented words for French and Italian, respectively, are in fact identical. Furthermore, the word too which is underrepresented for Spanish is a near synonym of also which appears in both French and Spanish. This suggests the possibility that interference effects in cognate languages such as French, Italian and Spanish might be similar. We will see presently that this is in fact the case. When a less related language is involved we see the opposite picture. For German, both underrepresented items appear as overrepresented in the Romance languages, and, conversely, underrepresented items in the Romance languages appear as overrepresented items for German. This may cast doubt on the idea that all translations share universal properties and that at best we may claim that particular properties are shared by closely related languages but not others. In the experiments presented in the next subsection, we‟ll find that translationese is gradable: closely related languages share more features, yet even further removed languages share enough properties to hold the general translationese hypothesis as valid. 
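A sketch of the source-language experiment of Section 2.2, with scikit-learn's logistic regression standing in for the Bayesian logistic regression package cited above, and a short illustrative word list standing in for the 300-item LIWC function-word list.

```python
from collections import Counter
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import cross_val_predict

# Stand-in for the 300 LIWC function words used in the paper.
FUNCTION_WORDS = ["the", "of", "here", "also", "too", "then", "with", "therefore",
                  "upon", "moreover", "me", "which", "be", "example", "finally"]

def function_word_vector(tokens):
    """Represent one chunk as relative frequencies of the function words."""
    counts = Counter(t.lower() for t in tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def classify_source_language(translated):
    """translated: dict mapping source language -> list of token-list chunks of translated English."""
    X = np.array([function_word_vector(c) for chunks in translated.values() for c in chunks])
    y = np.array([lang for lang, chunks in translated.items() for _ in chunks])
    clf = LogisticRegression(max_iter=1000)          # multinomial over the five source languages
    pred = cross_val_predict(clf, X, y, cv=10)       # 10-fold cross-validation
    labels = sorted(set(y))
    print("accuracy:", round(accuracy_score(y, pred), 3))
    print(labels)
    print(confusion_matrix(y, pred, labels=labels))  # analogue of the confusion matrix in Table 1
```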
2.3 Identifying translationese per source language We now wish to measure in a subtler manner the extent to which interference affects translation. In this experiment, the challenge is to learn a classifier that classifies a text as belonging to one of only two classes: original English (O) or translated-intoEnglish (T). The catch is that all our training texts for the class T will be translations from some fixed source language, while all our test documents in T will be translations from a different source language. What accuracy can be achieved in such an experiment? The answer to this question will tell us a great deal about how much of translationese is general and how much of it is language dependent. If accuracy is close to 100%, translationese is purely general (Baker, 1993). (We already know from the previous experiment that that's not the case.). If accuracy is near 50%, there are no general effects, just language-dependent ones. Note that, whereas in our first experiment above pair-specific interference facilitated good classification, in this experiment pair-specific interference is an impediment to good classification. The details of the experiment are as follows. We create, for example, a “French” corpus consisting of the 200 chunks of text translated from French and 200 original English texts. We similarly create a corpus for each of the other source languages, taking care that each of the 1000 original English texts appears in exactly one of the corpora. As above, we represent each chunk in terms of frequencies of function words. Now, using Bayesian logistic regression, we learn a classifier that distinguishes T from O in the French corpus. We then apply this learned classifier to the texts in, for example, the equivalent “Italian” corpus to see if we can classify them as translated or original. We repeat this for each of the 25 train_corpus, test_corpus pairs. In Table 3, we show the accuracy obtained for each such pair. (For the case where the training corpus and testing corpus are identical – the di1320 agonal of the matrix – we show results for ten-fold cross-validation.) We note several interesting facts. First, results of cross-validation within each corpus are very strong. For any given source language, it is quite easy to distinguish translations from original English. This corroborates results obtained by Baroni and Bernardini (2006), Ilisei et al. (2010), Kurokawa et al. (2009) and van Halteren (2008), which we will discuss below. We note further, that for the cases where we train on one source language and test on another, results are far worse. This clearly indicates that interference effects from one source language might be misleading when used to identify translations from a different language. Thus, for example, in the Finnish corpus, the word me is a strong indicator of original English (constituting 0.0003 of tokens in texts translated from Finnish as opposed to 0.0015 of tokens in original English texts), but in the German corpus, me is an indicator of translated text (constituting 0.0020 of tokens in text translated from German). The most interesting result that can be seen in this table is that the accuracy obtained when training using language x and testing using language y depends precisely on the degree of similarity between x and y. Thus, for training and testing within the three cognate languages, results are fairly strong, ranging between 84.5% and 91.5%. 
For training/testing on German and testing/training on one of the other European languages, results are worse, ranging from 68.5% to 83.3%. Finally, for training/testing on Finnish and testing/training on any of the European languages, results are still worse, hovering near 60% (with the single unexplained outlier for training on German and testing on Finnish). Finally, we note that even in the case of training or testing on Finnish, results are considerably better than random, suggesting that despite the confounding effects of interference, some general properties of translationese are being picked up in each case. We explore these in the following section. 3 General Properties of Translationese Having established that there are source-languagedependent effects on translations, let‟s now consider source-language-independent effects on translation. 3.1 Identifying translationese In order to identify general effects on translation, we now consider the same two-class classification problem as above, distinguishing T from O, except that now the translated texts in both our train and test data will be drawn from multiple source languages. If we succeed at this task, it must be because of features of translationese that cross source-languages. The details of our experiment are as follows. We use as our translated corpus, the 1000 translated chunks (200 from each of five source languages) and as our original English corpus all 1000 original English chunks. As above, we represent each chunk in terms of function words frequencies. We use Bayesian logistic regression to learn a twoclass classifier and test its accuracy using ten-fold cross-validation. Remarkably, we obtain accuracy of 96.7%. This result extends and strengthens results reported in some earlier studies. Ilisei et al. (2010), Kurokawa (2009) and van Halteren (2008) each obtained above 90% accuracy in distinguishing translation from original. However, in each case the translations were from a single source language. (Van Halteren considered multiple source languages, but each learned classifier used only one of them.) Thus, those results do not prove that translationese has distinctive source-languageindependent features. To our knowledge, the only earlier work that used a learned classifier to identify translations in which both test and train sets involved multiple source languages is Baroni and Bernardini (2006), in which the target language was Italian and the source languages were known to be varied. The actual distribution of source languages was, however, not known to the researchers. They obtained accuracy of 86.7%. Their result was obtained using combinations of lexical and syntactic features. Train It Fr Es De Fi It 98.3 91.5 86.5 71.3 61.5 Fr 91 97 86.5 68.5 60.8 Es 84.5 88.3 95.8 76.3 59.5 De 82 83.3 78.5 95 80.8 Fi 56 60.3 56 62.3 97.3 Table 3: Results of learning a T vs. O classifier using one source language and testing it using another source language 1321 3.2 Some distinguishing features Let us now consider some of the most salient function words for which frequency of usage in T differs significantly from that in O. While there are many such features, we focus on two categories of words that are most prominent among those with the most significant differences. First, we consider animate pronouns. In Table 4, we show the frequencies of animate pronouns in O and T, respectively (the possessive pronouns, mine, yours and hers, not shown, are extremely rare in the corpus). 
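A sketch of the train-on-one-source-language, test-on-another loop behind Table 3, assuming per-language chunk lists and a function-word featurizer like the one sketched earlier; scikit-learn's logistic regression again stands in for the Bayesian logistic regression used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def build_xy(t_chunks, o_chunks, featurize):
    """Stack translated (T) and original-English (O) chunks into a labeled matrix."""
    X = np.array([featurize(c) for c in t_chunks] + [featurize(c) for c in o_chunks])
    y = np.array(["T"] * len(t_chunks) + ["O"] * len(o_chunks))
    return X, y

def cross_language_grid(translated, originals, featurize):
    """translated[lang]: chunks translated from lang; originals[lang]: the disjoint
    original-English chunks paired with that language's corpus."""
    for train_lang in sorted(translated):
        X_tr, y_tr = build_xy(translated[train_lang], originals[train_lang], featurize)
        for test_lang in sorted(translated):
            if train_lang == test_lang:
                # Diagonal of Table 3: ten-fold cross-validation within one corpus.
                acc = cross_val_score(LogisticRegression(max_iter=1000), X_tr, y_tr, cv=10).mean()
            else:
                clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
                X_te, y_te = build_xy(translated[test_lang], originals[test_lang], featurize)
                acc = clf.score(X_te, y_te)
            print(f"train={train_lang} test={test_lang} accuracy={acc:.3f}")
```

The pooled experiment of Section 3.1 is the same pipeline with the five translated corpora concatenated into a single T class before the ten-fold cross-validation.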
As can be seen, all pronouns are under-represented in T; for most (bolded), the difference is significant at p<0.01. By contrast, the word the is significantly overrepresented in T (15.32% in T vs. 13.73% in O; significant at p<0.01). word freq O freq T I 2.552% 2.148% we 2.713% 2.344% you 0.479% 0.470% he 0.286% 0.115% she 0.081% 0.039% me 0.148% 0.141% us 0.415% 0.320% him 0.066% 0.033% her 0.091% 0.056% my 0.462% 0.345% our 0.696% 0.632% your 0.119% 0.109% his 0.218% 0.123% Table 4: Frequency of pronouns in O and T in the Europarl corpus. Bold indicates significance at p<0.01. In Table 5, we consider cohesive markers, tagged as adverbs (Schmid, 2004). (These are adverbs that can appear at the beginning of a sentence followed immediately by a comma.) word freq O freq T therefore 0.153% 0.287% thus 0.015% 0.041% consequently 0.006% 0.014% hence 0.007% 0.013% accordingly 0.006% 0.011% however 0.216% 0.241% nevertheless 0.019% 0.045% also 0.460% 0.657% furthermore 0.012% 0.048% moreover 0.008% 0.036% indeed 0.098% 0.053% actually 0.065% 0.042% Table 5: Frequency of cohesive adverbs in O and T in the Europarl corpus. Bold indicates significance at p<0.01. We note that the preponderance of such cohesive markers are significantly more frequent in translations. In fact, we also find that a variety of phrases that serve the same purpose as cohesive adverbs, such as in fact and as a result are significantly more frequent in translationese. The general principle underlying these phenomena is subject to speculation. Previous researchers have noted the phenomenon of explicitation, according to which translators tend to render implicit utterances in the source text into explicit utterances in the target text (Blum-Kulka, 1986, Laviosa-Braithwaite, 1998), for example by filling out elliptical expressions or adding connectives to increase cohesion of the text (Laviosa-Braithwaite, 1998). It is plausible that the use of cohesive adverbs is an instantiation of this phenomenon. With regard to the under-representation of pronouns and the over-representation of the, there are a number of possible interpretations. It may be that this too is the result of explicitation, in which anaphora is resolved by replacing pronouns with noun phrases (e.g., the man instead of he). But it also might be that this is an example of simplification (Laviosa- Braithwaite 1998, Laviosa 2002), according to which the translator simplifies the message, the language, or both. Related results confirming the simplification hypothesis were found by Ilisei et al. (2010) on Spanish texts. In particular, they found that type-to-token ratio (lexical variety/richness), mean sentence length and proportion of grammatical words (lexical density/readability) are all smaller in translated texts. We note that Van Halteren (2008) and Kurokawa et al. (2009), who considered lexical features, found cultural differences, like over-representation of ladies and gentlemen in translated speeches. Such differences, while of general interest, are orthogonal to our purposes in this paper. 1322 3.3 Overriding language-specific effects We found in Section 2.3 that when we trained in one language and tested in another, classification succeeded to the extent that the source languages used in training and testing, respectively, are related to each other. In effect, general differences between translationese and original English were partially overwhelmed by language-specific differences that held for the training language but not the test language. 
We thus now revisit that earlier experiment, but restrict ourselves to features that distinguish translationese from original English generally. To do this, we use the small development corpus described in Section 2.1. We use Bayesian logistic regression to learn a classifier to distinguish between translationese and original English. We select the 10 highest-weighted function-word markers for T and the 10 highest-weighted function-word markers for O in the development corpus. We then rerun our train-on-source-language-x, test-on-source-language-y experiment using this restricted set as our feature set. We now find that even in the difficult case where we train on Finnish and test on another language (or vice versa), we succeed at distinguishing translationese from original English with accuracy above 80%. This considerably improves the earlier results shown in Table 3. Thus, a bit of feature engineering facilitates learning a good classifier for T vs. O even across source languages. 4 Other Genres and Language Families We have found both general and language-specific differences between translationese and original English in one large corpus. It might be wondered whether the phenomena we have found hold in other genres and for a completely different set of source languages. To test this, we consider a second corpus. 4.1 The IHT corpus Our second corpus includes three translated corpora, each of which is an on-line local supplement to the International Herald Tribune (IHT): Kathimerini (translated from Greek), Ha’aretz (translated from Hebrew), and the JoongAng Daily (translated from Korean). In addition, the corpus includes original English articles from the IHT. Each of the four components contains four different domains balanced roughly equally: news (80,000 words), arts and leisure (50,000), business and finance (50,000), and opinion (50,000) and each covers the period from April-September 2004. Each component consists of about 230,000 tokens. (Unlike for our Europarl corpus, the amount of English text available is not equal to the aggregate of the translated corpora, but rather equal to each of the individual corpora.) It should be noted that the IHT corpus belongs to the writing modality while the Europarl corpus belongs to the speaking modality (although possibly post-edited). Furthermore, the source languages (Hebrew, Greek and Korean) in the IHT corpus are more disparate than those in the Europarl corpus. Our first objective is to confirm that the results we obtained earlier on the Europarl corpus hold for the IHT corpus as well. Perhaps more interestingly, our second objective is to see if the gradability phenomenon observed earlier (Table 3) generalizes to families of languages. Our first hypothesis is that a classifier for identifying translationese that is trained on Europarl will succeed only weakly to identify translationese in IHT. But our second hypothesis is that there are sufficient general properties of translationese that cross language families and genres that a learned classifier can accurately identify translationese even on a test corpus that includes both corpora, spanning eight disparate languages across two distinct genres. 4.2 Results on IHT corpus Running essentially the same experiments as described for the Europarl corpus, we obtain the following results. First of all, we can determine source language with accuracy of 86.5%. 
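As an aside on the feature engineering of Section 3.3 above, a sketch of the marker-selection step: train a T-vs-O classifier on the held-out development chunks, keep the ten most positively and ten most negatively weighted function words, and rerun the cross-language experiments on that reduced feature set. The helper names and the use of scikit-learn are assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_marker_features(X_dev, y_dev, feature_names, k=10):
    """Return the k strongest markers of each class, ranked by the weights of a
    classifier trained on the development corpus only."""
    clf = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    weights = clf.coef_[0]                   # one weight per function word
    order = np.argsort(weights)
    # Which end of the ranking marks T depends on the class encoding,
    # so simply keep both extremes of the ranking.
    chosen = list(order[:k]) + list(order[-k:])
    return [feature_names[i] for i in chosen], chosen

# The cross-language grid sketched earlier can then be rerun with the feature
# matrices restricted to the returned column indices, e.g. X[:, chosen].
```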
This is a somewhat weaker result than the 92.7% result obtained on Europarl, especially considering that there are only three classes instead of five. The difference is most likely due to the fact that the IHT corpus is about half the size of the Europarl corpus. Nevertheless, it is clear that source language strongly affects translationese in this corpus. Second, as can be seen in Table 6, we find that the gradability phenomenon occurs in this corpus as well. Results are strongest when the train and 1323 test corpora involve the same source language and trials involving Korean, the most distant language, are somewhat weaker than those across Greek and Hebrew. Train Gr He Ko Gr 89.8 73.4 64.8 He 82.0 86.3 65.5 Ko 73.0 72.5 85.0 Table 6: Results of learning a T vs. O classifier using one source language and testing it using another source language Third, we find in ten-fold cross-validation experiments that we can distinguish translationese from original English in the IHT corpus with accuracy of 86.3%. Thus, despite the great distance between the three source languages in this corpus, general differences between translationese and original English are sufficient to facilitate reasonably accurate identification of translationese. 4.3 Combining the corpora First, we consider whether a classifier learned on the Europarl corpus can be used to identify translationese in the IHT corpus, and vice versa. It would be consistent with our findings in Section 2.3, that we would achieve better than random results but not high accuracy, since there are no doubt features common to translations from the five European languages of Europarl that are distinct from those of translations from the very different languages in IHT. In fact, we find that training on Europarl and testing on IHT yields accuracy of 64.8%, while training on IHT and testing on Europarl yields accuracy of 58.8%. The weak results reflect both differences between the families of source languages involved in the respective corpora, as well as genre differences. Thus, for example, we find that of the pronouns shown in Table 4 above, only he and his are significantly under-represented in translationese in the IHT corpus. Thus, that effect is specific either to the genre of Europarl or to the European languages considered there. Now, we combine the two corpora and check if we can identify translationese across two genres and eight languages. We run the same experiments as described above, using 200 texts from each of the eight source languages and 1600 non-translated English texts, 1000 from Europarl and 600 from IHT. In 10-fold cross-validation, we find that we can distinguish translationese from non-translated English with accuracy of 90.5%. This shows that there are features of translationese that cross genres and widely disparate languages. Thus, for one prominent example, we find that, as in Europarl, the word the is overrepresented in translationese in IHT (15.36% in T vs. 13.31% in O; significant at p<0.01). In fact, the frequencies across corpora are astonishingly consistent. To further appreciate this point, let‟s look at the frequencies of cohesive adverbs in the IHT corpus. We find essentially, the same pattern in IHT as we did in Europarl. The preponderance of cohesive adverbs are over-represented in translationese, most of them with differences significant at p<0.01. Curiously, the word actually is a counterexample in both corpora. 
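The frequency comparisons behind Tables 4, 5, and 7 amount to comparing a word's rate of occurrence in the translated and original sub-corpora and checking significance. A sketch follows, using a two-proportion z-test as the significance check; the papers do not spell out the exact test used for these tables, so treat that choice as an assumption.

```python
import math
from collections import Counter

def word_rates(chunks):
    """Pooled token counts over a list of token-list chunks."""
    counts = Counter(t.lower() for chunk in chunks for t in chunk)
    return counts, sum(counts.values())

def compare_word(word, o_chunks, t_chunks):
    """Frequency of `word` in original (O) vs translated (T) text,
    with a two-proportion z-test as an approximate significance check."""
    o_counts, n_o = word_rates(o_chunks)
    t_counts, n_t = word_rates(t_chunks)
    p_o, p_t = o_counts[word] / n_o, t_counts[word] / n_t
    p = (o_counts[word] + t_counts[word]) / (n_o + n_t)
    z = (p_o - p_t) / math.sqrt(p * (1 - p) * (1 / n_o + 1 / n_t))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_o, p_t, p_value

# e.g. compare_word("the", original_chunks, translated_chunks) should reproduce the
# kind of O/T gap reported above (13.73% vs 15.32% in Europarl).
```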
5 Conclusions We have found that we can learn classifiers that determine source language given a translated text, as well as classifiers that distinguish translated text from non-translated text in the source language. These text categorization experiments suggest that both source language and the mere fact of being word freq O freq T therefore 0.011% 0.031% thus 0.011% 0.027% consequently 0.000% 0.004% hence 0.003% 0.007% accordingly 0.003% 0.003% however 0.078% 0.129% nevertheless 0.008% 0.018% also 0.305% 0.453% furthermore 0.003% 0.011% moreover 0.009% 0.008% indeed 0.018% 0.024% actually 0.032% 0.018% Table 7: Frequency of cohesive adverbs in O and T in the IHT corpus. Bold indicates significance at p<0.01. 1324 translated play a crucial role in the makeup of a translated text. It is important to note that our learned classifiers are based solely on function words, so that, unlike earlier studies, the differences we find are unlikely to include cultural or thematic differences that might be artifacts of corpus construction. In addition, we find that the exploitability of differences between translated texts and nontranslated texts are related to the difference between source languages: translations from similar source languages are different from non-translated texts in similar ways. Linguists use a variety of methods to quantify the extent of differences and similarities between languages. For example, Fusco (1990) studies translations between Spanish and Italian and considers the impact of structural differences between the two languages on translation quality. Studying the differences and distance between languages by comparing translations into the same language may serve as another way to deepen our typological knowledge. As we have seen, training on source language x and testing on source language y provides us with a good estimation of the distance between languages, in accordance with what we find in standard works on typology (cf. Katzner, 2002). In addition to its intrinsic interest, the finding that the distance between languages is directly correlated with our ability to distinguish translations from a given source language from non-translated text is of great importance for several computational tasks. First, translations can be studied in order to shed new light on the differences between languages and can bear on attested techniques for using cognates to improve machine translation (Kondrak & Sherif, 2006). Additionally, given the results of our experiments, it stands to reason that using translated texts, especially from related source languages, will prove beneficial for constructing language models and will outperform results obtained from non-translated texts. This, too, bears on the quality of machine translation. Finally, we find that there are general properties of translationese sufficiently strong that we can identify translationese even in a combined corpus that is comprised of eight very disparate languages across two distinct genres, one spoken and the other written. Prominent among these properties is the word the, as well as a number of cohesive adverbs, each of which is significantly over-represented in translated texts. References Mona Baker. 1993. Corpus linguistics and translation studies: Implications and applications. In Gill Francis Mona Baker and Elena Tognini Bonelli, editors, Text and technology: in honour of John Sinclair, pages 233-252. John Benjamins, Amsterdam. Marco Baroni and Silvia Bernardini. 2006. 
A new approach to the study of Translationese: Machinelearning the difference between original and translated text. Literary and Linguistic Computing, 21(3):259-274. Shoshan Blum-Kulka. Shifts of cohesion and coherence in translation. 1986. In Juliane House and Shoshana Blum-Kulka (Eds), Interlingual and Intercultural Communication (17-35). Tübingen: Günter Narr Verlag. William Frawley. 1984. Prolegomenon to a theory of translation. In William Frawley (ed), Translation. Literary, Linguistic and Philosophical Perspectives (179-175). Newark: University of Delaware Press. Maria Antonietta Fusco. 1990. Quality in conference interpreting between cognate languages: A preliminary approach to the Spanish-Italian case. The Interpreters’ Newsletter, 3, 93-97. Martin Gellerstam. 1986. Translationese in Swedish novels translated from English, in Lars Wollin & Hans Lindquist (eds.), Translation Studies in Scandinavia (88-95). Lund: CWK Gleerup. Iustina Ilisei, Diana Inkpen, Gloria Corpas Pastor, and Ruslan Mitkov. Identification of translationese: A machine learning approach. In Alexander F. Gelbukh, editor, Proceedings of CICLing-2010: Computational Linguistics and Intelligent Text Processing, 11th International, volume 6008 of Lecture Notes in Computer Science, pages 503-511. Springer, 2010. Kenneth Katzner. 2002. The Languages of the World. Routledge. Grzegorz Kondrak and Tarek Sherif. 2006. Evaluation of several phonetic similarity algorithms on the task of cognate identification. In Proceedings of the Workshop on Linguistic Distances (LD '06). 43-50. David Kurokawa, Cyril Goutte, and Pierre Isabelle. 2009. Automatic detection of translated text and its impact on machine translation. In Proceedings of MT-Summit XII. Sara Laviosa: 1997. How Comparable can 'Comparable Corpora' Be?. Target, 9 (2), pp. 289-319. 1325 Sara Laviosa-Braithwaite. 1998. In Mona Baker (ed.) Routledge Encyclopedia of Translation Studies. London/New York: Routledge, pp.288-291. Sara Laviosa. 2002. Corpus-based Translation Studies. Theory, Findings, Applications. Amsterdam/New York: Rodopi. David Madigan, Alexander Genkin, David D. Lewis and Dmitriy Fradkin 2005. Bayesian Multinomial Logistic Regression for Author Identification, In Maxent Conference, 509-516. James W. Pennebaker, Martha E. Francis, and Roger J. Booth. 2001. Linguistic Inquiry and Word Count (LIWC): LIWC2001 Manual. Erlbaum Publishers, Mahwah, NJ, USA. Helmut Schmid. Probabilistic Part-of-Speech Tagging Using Decision Trees. 2004. In Proceedings of International Conference on New Methods in Language Processing. Larry Selinker.1972. Interlanguage. International Review of Applied Linguistics. 10, 209-241. Gideon Toury. 1995. Descriptive Translation Studies and beyond. John Benjamins, Amsterdam / Philadelphia. Hans van Halteren. 2008. Source language markers in EUROPARL translations. In COLING '08: Proceedings of the 22nd International Conference on Computational Linguistics, pages 937-944. 1326
2011
132
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1327–1335, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Rare Word Translation Extraction from Aligned Comparable Documents Emmanuel Prochasson and Pascale Fung Human Language Technology Center Hong Kong University of Science and Technology Clear Water Bay, Kowloon, Hong Kong {eemmanuel,pascale}@ust.hk Abstract We present a first known result of high precision rare word bilingual extraction from comparable corpora, using aligned comparable documents and supervised classification. We incorporate two features, a context-vector similarity and a co-occurrence model between words in aligned documents in a machine learning approach. We test our hypothesis on different pairs of languages and corpora. We obtain very high F-Measure between 80% and 98% for recognizing and extracting correct translations for rare terms (from 1 to 5 occurrences). Moreover, we show that our system can be trained on a pair of languages and test on a different pair of languages, obtaining a F-Measure of 77% for the classification of Chinese-English translations using a training corpus of Spanish-French. Our method is therefore even potentially applicable to low resources languages without training data. 1 Introduction Rare words have long been a challenge to translate automatically using statistical methods due to their low occurrences. However, the Zipf’s Law claims that, for any corpus of natural language text, the frequency of a word wn (n being its rank in the frequency table) will be roughly twice as high as the frequency of word wn+1. The logical consequence is that in any corpus, there are very few frequent words and many rare words. We propose a novel approach to extract rare word translations from comparable corpora, relying on two main features. The first feature is the context-vector similarity (Fung, 2000; Chiao and Zweigenbaum, 2002; Laroche and Langlais, 2010): each word is characterized by its context in both source and target corpora, words in translation should have similar context in both languages. The second feature follows the assumption that specific terms and their translations should appear together often in documents on the same topic, and rarely in non-related documents. This is the general assumption behind early work on bilingual lexicon extraction from parallel documents using sentence boundary as the context window size for cooccurrence computation, we suggest to extend it to aligned comparable documents using document as the context window. This document context is too large for co-occurrence computation of functional words or high frequency content words, but we show through observations and experiments that this window size is appropriate for rare words. Both these features are unreliable when the number of occurrences of words are low. We suggest however that they are complementary and can be used together in a machine learning approach. Moreover, we suggest that the model trained for one pair of languages can be successfully applied to extract translations from another pair of languages. This paper is organized as follows. In the next section, we discuss the challenge of rare lexicon extraction, explaining the reasons why classic approaches on comparable corpora fail at dealing with rare words. We then discuss in section 3 the concept of aligned comparable documents and how we exploited those documents for bilingual lexicon extraction in section 4. 
We present our resources and implementation in section 5 then carry out and comment several experiments in section 6. 1327 2 The challenge of rare lexicon extraction There are few previous works focusing on the extraction of rare word translations, especially from comparable corpora. One of the earliest works is from (Pekar et al., 2006). They emphasized the fact that the context-vector based approach, used for processing comparable corpora, perform quite unreliably on all but the most frequent words. In a nutshell1, this approach proceeds by gathering the context of words in source and target languages inside context-vectors, then compares source and target context-vectors using similarity measures. In a monolingual context, such an approach is used to automatically get synonymy relationship between words to build thesaurus (Grefenstette, 1994). In the multilingual case, it is used to extract translations, that is, pairs of words with the same meaning in source and target corpora. It relies on the Firthien hypothesis that you shall know a word by the company it keeps (Firth, 1957). To show that the frequency of a word influences its alignment, (Pekar et al., 2006) used six pairs of comparable corpora, ranking translations according to their frequencies. The less frequent words are ranked around 100-160 by their algorithm, while the most frequent ones typically appear at rank 20-40. We ran a similar experiment using a FrenchEnglish comparable corpus containing medical documents, all related to the topic of breast cancer, all manually classified as scientific discourse. The French part contains about 530,000 words while the English part contains about 7.4 millions words. For this experiment though, we sampled the English part to obtain a 530,000-words large corpus, matching the size of the French part. Using an implementation of the context-vector similarity, we show in figure 1 that frequent words (above 400 occurrences in the corpus) reach a 60% precision whereas rare words (below 15 occurrences) are correctly aligned in only 5% of the time. These results can be explained by the fact that, for the vector comparison to be efficient, the information they store has to be relevant and discriminatory. If there are not enough occurrences of a word, it is 1Detailed presentations can be found for example in (Fung, 2000; Chiao and Zweigenbaum, 2002; Laroche and Langlais, 2010). Figure 1: Results for context-vector based translations extraction with respect to word frequency. The vertical axis is the amount of correct translations found for Top1, and the horizontal axis is the word occurrences in the corpus. impossible to get a precise description of the typical context of this word, and therefore its description is likely to be very different for source and target words in translation. We confirmed this result with another observation on the full English part of the previous corpus, randomly split in 14 samples of the same size. The context-vectors for very frequent words, such as cancer (between 3,000 and 4,000 occurrences in each sample) are very similar across the subsets. Less frequent words, such as abnormality (between 70 and 16 occurrences in each sample) have very unstable context-vectors, hence a lower similarity across the subsets. This observation actually indicates that it will be difficult to align abnormality with itself. 
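A sketch of the kind of evaluation behind Figure 1: bucket the test words by their corpus frequency and measure, per bucket, how often the top-ranked candidate is a correct translation. The ranking function and the gold dictionary are assumed to come from elsewhere (for instance, a context-vector ranker), and the bucket boundaries are illustrative.

```python
from collections import defaultdict

def precision_by_frequency(test_words, gold, rank_candidates, freq, buckets=(15, 50, 100, 400)):
    """test_words: source words to align; gold[word]: set of reference translations;
    rank_candidates(word): target words sorted by decreasing score;
    freq[word]: number of occurrences of the source word in the corpus."""
    hits, totals = defaultdict(int), defaultdict(int)
    for w in test_words:
        bucket = next((b for b in buckets if freq[w] <= b), "more")
        totals[bucket] += 1
        top = rank_candidates(w)[:1]
        if top and top[0] in gold[w]:
            hits[bucket] += 1
    return {b: hits[b] / totals[b] for b in totals}

# precision_by_frequency(test_words, gold, ranker, freq) gives the Top-1 precision per
# frequency band, e.g. rare words (<= 15 occurrences) vs frequent words (> 400 occurrences).
```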
3 Aligned comparable documents A pair of aligned comparable documents is a particular case of comparable corpus: two comparable documents share the same topic and domain; they both relate the same information but are not mutual translations; although they might share parallel chunks (Munteanu and Marcu, 2005) – paragraphs, sentences or phrases – in the general case they were written independently. These comparable documents, when concatenated together in order, form an aligned comparable corpus. 1328 Examples of such aligned documents can be found, for example in (Munteanu and Marcu, 2005): they aligned comparable documents with close publication dates. (Tao and Zhai, 2005) used an iterative, bootstrapping approach to align comparable documents using examples of already aligned corpora. (Smith et al., 2010) aligned documents from Wikipedia following the interlingual links provided on articles. We take advantage of this alignment between documents: by looking at what is common between two aligned documents and what is different in other documents, we obtain more precise information about terms than when using a larger comparable corpus without alignment. This is especially interesting in the case of rare lexicon as the classic context-vector similarity is not discriminatory enough and fails at raising interesting translation for rare words. 4 Rare word translations from aligned comparable documents 4.1 Co-occurrence model Different approaches have been proposed for bilingual lexicon extraction from parallel corpora, relying on the assumption that a word has one sense, one translation, no missing translation, and that its translation appears in aligned parallel sentences (Fung, 2000). Therefore, translations can be extracted by comparing the distribution of words across the sentences. For example, (Gale and Church, 1991) used a derivative of the χ2 statistics to evaluate the association between words in aligned region of parallel documents. Such association scores evaluate the strength of the relation between events. In the case of parallel sentences and lexicon extraction, they measure how often two words appear in aligned sentences, and how often one appears without the other. More precisely, they will compare their number of co-occurrences against the expected number of cooccurrences under the null-hypothesis that words are randomly distributed. If they appear together more often than expected, they are considered as associated (Evert, 2008). We focus in this work on rare words, more precisely on specialized terminology. We define them as the set of terms that appear from 1 (hapaxes) to 5 times. We use a strategy similar to the one applied on parallel sentences, but rely on aligned documents. Our hypothesis is very similar: words in translation should appear in aligned comparable documents. We used the Jaccard similarity (eq. 1) to evaluate the association between words among aligned comparable documents. In the general case, this measure would not give relevant scores due to frequency issue: it produces the same scores for two words that appear always together, and never one without the other, disregarding the fact that they appear 500 times or one time only. Other association scores generally rely on occurrence and cooccurrence counts to tackle this issue (such as the log-likelihood, eq. 2). In our case, the number of co-occurrences will be limited by the number of occurrences of the words, from 1 to 5. Therefore, the Jaccard similarity efficiently reflects what we want to observe. 
\[ J(w_i, w_j) = \frac{|A_i \cap A_j|}{|A_i \cup A_j|}, \qquad A_i = \{d : w_i \in d\} \quad (1) \]

A score of 1 indicates a perfect association (the words always appear together and never one without the other); the more often one word appears without the other, the lower the score.

4.2 Context-vector similarity
We implemented the context-vector similarity in a way similar to (Morin et al., 2007). In all experiments, we used the same set of parameters, as they yielded the best results on our corpora. We built the context-vectors using nouns only as seed lexicon, with a window size of 20. Source context-vectors are translated into the target language using the resources presented in the next section. We used the log-likelihood (Dunning, 1993, eq. 2) for context-vector normalization (O is the observed number of co-occurrences in the corpus, E is the expected number of co-occurrences under the null hypothesis). We used the Cosine similarity (eq. 3) for context-vector comparisons.

\[ ll(w_i, w_j) = 2 \sum_{ij} O_{ij} \log \frac{O_{ij}}{E_{ij}} \quad (2) \]

\[ \mathrm{Cosine}(A, B) = \frac{A \cdot B}{\|A\|^2 + \|B\|^2 - A \cdot B} \quad (3) \]

4.3 Binary classification of rare translations
We propose to incorporate both the context-vector similarity and the co-occurrence features in a machine learning approach. This approach consists of training a classifier on positive examples of translation pairs and negative examples of non-translation pairs. The trained model (in our case, a decision tree) is then used to tag an unknown pair of words as either "Translation" or "Non-Translation".
One potential problem for building the training set, as pointed out for example by (Zhao and Ng, 2007), is this: we have a limited number of positive examples but a very large number of non-translation examples, as is obviously the case for rare word translations in any training corpus. Including too many negative examples in the training set would lead the classifier to label every pair as "Non-Translation". To tackle this problem, (Zhao and Ng, 2007) tuned the positive/negative ratio by resampling the positive examples in the training set. We chose instead to reduce the set of negative examples, and found that a ratio of five negative examples to one positive is optimal in our case. A lower ratio improves precision but reduces recall for the "Translation" class.
It is also desirable that the classifier focuses on discriminating between confusing pairs of translations. As most of the negative examples have a null co-occurrence score and a null context-vector similarity, they are excluded from the training set. The negative examples are randomly chosen among those that fulfill the following constraints:
• non-null features;
• a ratio between the numbers of occurrences of the source and target words higher than 0.2 and lower than 5.
We use the J48 decision tree algorithm, in the Weka environment (Hall et al., 2009). Features are computed using the Jaccard similarity (section 3) for the co-occurrence model, and the implementation of the context-vector similarity presented in section 4.2.

4.4 Extension to another pair of languages
Even though the context-vector similarity has been shown to achieve different accuracy depending on the pair of languages involved, the co-occurrence model is totally language independent. In the case of binary classification of translations, the two models are complementary to each other: word pairs with null co-occurrence are not considered by the context model, while the context-vector model gives more semantic information than the co-occurrence model.
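A sketch of how the two features above might be computed, assuming the corpus has already been reduced to, for each word, the set of indices of the aligned document pairs it occurs in (for eq. 1) and a translated context-vector over the seed lexicon (for eqs. 2 and 3); the data structures and helper names are illustrative. The two scores are exactly the complementary signals just described: the first is language independent, the second carries the contextual evidence.

```python
def jaccard(doc_ids_src, doc_ids_tgt):
    """Eq. (1): document-level co-occurrence of a source word and a target word,
    where each argument is the set of aligned-document-pair indices the word occurs in."""
    union = doc_ids_src | doc_ids_tgt
    return len(doc_ids_src & doc_ids_tgt) / len(union) if union else 0.0

def context_similarity(vec_src_translated, vec_tgt):
    """Eq. (3) as written above, applied to context-vectors assumed to have been built
    with a 20-word window, weighted by log-likelihood (eq. 2), and, for the source side,
    mapped into the target language through the seed dictionary.
    Vectors are dicts mapping seed-lexicon words to weights."""
    dot = sum(w * vec_tgt.get(k, 0.0) for k, w in vec_src_translated.items())
    norm_a = sum(w * w for w in vec_src_translated.values())
    norm_b = sum(w * w for w in vec_tgt.values())
    denom = norm_a + norm_b - dot
    return dot / denom if denom else 0.0

def features(src_word, tgt_word, doc_index, context_vectors):
    """Two-dimensional feature vector for one candidate pair.
    doc_index[side][word]: set of document-pair indices; context_vectors[side][word]: dict."""
    co_occ = jaccard(doc_index["src"].get(src_word, set()),
                     doc_index["tgt"].get(tgt_word, set()))
    ctx = context_similarity(context_vectors["src"].get(src_word, {}),
                             context_vectors["tgt"].get(tgt_word, {}))
    return [co_occ, ctx]
```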
For these reasons, we suggest that it is possible to use a decision tree trained on one pair of languages to extract translations from another pair of languages. A similar approach is proposed in (Alfonseca et al., 2008): they present a word decomposition model designed for German language that they successfully applied to other compounding languages. Our approach consists in training a decision tree on a pair of languages and applying this model to the classification of unknown pairs of words in another pair of languages. Such an approach is especially useful for prospecting new translations from less known languages, using a well known language as training. We used the same algorithms and same features as in the previous sections, but used the data computed from one pair of languages as the training set, and the data computed from another pair of languages as the testing set. 5 Experimental setup 5.1 Corpora We built several corpora using two different strategies. The first set was built using Wikipedia and the interlingual links available on articles (that points to another version of the same article in another language). We started from the list of all French articles2 and randomly selected articles that provide a link to Spanish and English versions. We downloaded those, and clean them by removing the wikipedia formatting tags to obtain raw UTF8 texts. Articles were not selected based on their sizes, the vocabulary used, nor a particular topic. We obtained about 20,000 aligned documents for each language. A second set was built using an in-house system 2Available on http://download.wikimedia.org/. 1330 [WP] French [WP] English [WP] Es [CLIR] En [CLIR] Zh #documents 20,169 20,169 20,169 15,3247 15,3247 #tokens 4,008,284 5,470,661 2,741,789 1,334,071 1,228,330 #unique tokens 120,238 128,831 103,398 30,984 60,015 Table 1: Statistics for all parts of all corpora. (unpublished) that seeks for comparable and parallel documents from the web. Starting from a list of Chinese documents (in this case, mostly news articles), we automatically selected English target documents using Cross Language Information Retrieval. About 85% of the paired documents obtained are direct translations (header/footer of web pages apart). However, they will be processed just like aligned comparable documents, that is, we will not take advantage of the structure of the parallel contents to improve accuracy, but will use the exact same approach that we applied for the Wikipedia documents. We gathered about 15,000 pairs of documents employing this method. All corpora were processed using Tree-Tagger3 for segmentation and Part-of-Speech tagging. We focused on nouns only and discarded all other tokens. We would record the lemmatized form of tokens when available, otherwise we would record the original form. Table 1 summarizes main statistics for each corpus; [WP] refers to the Wikipedia corpora, [CLIR] to the Chinese-English corpora extracted through cross language information retrieval. 5.2 Dictionaries We need a bilingual seed lexicon for the contextvector similarity. We used a French-English lexicon obtained from the Web. It contains about 67,000 entries. The Spanish-English and SpanishFrench dictionaries were extracted from the linguistic resources of the Apertium project4. We obtained approximately 22,500 Spanish-English translations and 12,000 for Spanish-French. Finally, for Chinese-English we used the LDC2002L27 resource from the Linguistic Data Consortium5 with about 122,000 entries. 
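A sketch of the classification setup of Sections 4.3 and 4.4, with scikit-learn's DecisionTreeClassifier standing in for Weka's J48; candidate pairs are assumed to be given as records holding a feature vector and occurrence counts, and the oracle list supplies the positives. The reading of "non-null features" as "at least one non-zero score" is an assumption.

```python
import random
from sklearn.tree import DecisionTreeClassifier

def admissible(pair):
    """Negative-example constraints from Section 4.3: non-null features (read here as at
    least one non-zero score) and a source/target occurrence ratio between 0.2 and 5."""
    co_occ, ctx = pair["features"]
    ratio = pair["src_count"] / pair["tgt_count"]
    return (co_occ > 0 or ctx > 0) and 0.2 < ratio < 5

def build_training_set(positives, candidates, neg_ratio=5, seed=0):
    """positives: oracle translation pairs; candidates: all other scored pairs;
    keep neg_ratio negatives per positive, sampled at random among admissible pairs."""
    negatives = [p for p in candidates if admissible(p)]
    random.Random(seed).shuffle(negatives)
    negatives = negatives[:neg_ratio * len(positives)]
    X = [p["features"] for p in positives + negatives]
    y = ["Translation"] * len(positives) + ["Non-Translation"] * len(negatives)
    return X, y

# Usage (hypothetical variable names):
#   X_train, y_train = build_training_set(sf_positives, sf_candidates)   # e.g. Spanish-French
#   clf = DecisionTreeClassifier().fit(X_train, y_train)
#   labels = clf.predict([p["features"] for p in zh_en_candidates])      # e.g. Chinese-English,
#                                                                        # as in Section 4.4
```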
3http://www.ims.uni-stuttgart. de/projekte/corplex/TreeTagger/ DecisionTreeTagger.html 4http://www.apertium.org 5http://www.ldc.upenn.edu 5.3 Evaluation lists To evaluate our approach, we needed evaluation lists of terms for which translations are already known. We used the Medical Subject Headlines, from the UMLS meta-thesaurus6 which provides a lexicon of specialized, medical terminology, notably in Spanish, English and French. We used the LDC lexicon presented in the previous section for ChineseEnglish. From these resources, we selected all the source words that appears from 1 to 5 times in the corpora in order to build the evaluation lists. 5.4 Oracle translations We looked at the corpora to evaluate how many translation pairs from the evaluation lists can be found across the aligned comparable documents. Those translations are hereafter the oracle translations. For French/English, French/Spanish and English/Spanish, about 60% of the translation pairs can be found. For Chinese/English, this ratio reaches 45%. The main reason for this lower result is the inaccuracy of the segmentation tool used to process Chinese. Segmentation tools usually rely on a training corpus and typically fail at handling rare words which, by definition, were unlikely to be found in the training examples. Therefore, some rare Chinese tokens found in our corpus are the results of faulty segmentation, and the translation of those faulty words can not be found in related documents. We encountered the same issue but at a much lower degree for other languages because of spelling mistakes and/or improper Part-of-Speech tagging. 6 Experiments We ran three different experiments. Experiment I compares the accuracy of the context-vector similarity and the co-occurrence model. Experiment II uses supervised classification with both features. 6http://www.nlm.nih.gov/research/umls/ 1331 Figure 2: Experiment I: comparison of accuracy obtained for the Top10 with the context-vector similarity and the co-occurrence model, for hapaxes (left) and words that appear 2 to 5 times (right). Experiment III extracts translation from a pair of languages, using a classifier trained on another pair of languages. 6.1 Experiment I: co-occurrence model vs. context-vector similarity We split the French-English part of the Wikipedia corpus into different samples: the first sample contains 500 pairs of documents. We then aggregated more documents to this initial sample to test different sizes of corpora. We built the sample in order to ensure hapaxes in the whole corpus are hapaxes in all subsets. That is, we ensured the 431 hapaxes in the evaluation lists are represented in the 500 documents subset. We extracted translations in two different ways: 1. using the co-occurrence model; 2. using the context-vector based approach, with the same evaluation lists. The accuracy is computed on 1,000 pairs of translations from the set of oracle translations, and measures the amount of correct translations found for the 10 best ranks (Top10) after ranking the candidates according to their score (context-vector similarity or co-occurrence model). The results are presented in figure 2. We can draw two conclusions out of these results. First, the size of the corpus influences the quality of the bilingual lexicon extraction when using the co-occurrence model. This is especially interesting with hapaxes, for which frequency does not change with the increase of the size of the corpora. 
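A sketch of this Experiment I loop: for growing subsets of the aligned document pairs, rank the target candidates for each test word by the co-occurrence score and count how often a reference translation appears in the top 10. It assumes the per-word document-index sets and a jaccard function like the ones sketched earlier, and that document-pair indices run from 0 upward.

```python
def top10_accuracy(test_words, gold, doc_index_src, doc_index_tgt, n_docs, jaccard):
    """Top-10 accuracy of the co-occurrence ranking when only the first n_docs
    aligned document pairs are used; gold[word] is the set of reference translations."""
    def restrict(doc_ids):
        return {d for d in doc_ids if d < n_docs}
    hits = 0
    for w in test_words:
        src_docs = restrict(doc_index_src.get(w, set()))
        scored = [(jaccard(src_docs, restrict(docs)), t) for t, docs in doc_index_tgt.items()]
        top10 = {t for _, t in sorted(scored, reverse=True)[:10]}
        if top10 & gold[w]:
            hits += 1
    return hits / len(test_words)

# e.g. [top10_accuracy(words, gold, src_idx, tgt_idx, n, jaccard)
#       for n in (500, 2000, 10000, 20169)]
```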
The accuracy is improved by adding more information to the corpus, even if this additional information does not cover the pairs of translations we are looking for. The added documents will weaken the association of incorrect translations, without changing the association for rare terms translations. For example, the precision for hapaxes using the co-occurrence model ranges from less than 1% when using only 500 pairs of documents, to about 13% when using all documents. The second conclusion is that the co-occurrence model outperforms the context-vector similarity. However, both these approaches still perform poorly. In the next experiment, we propose to combine them using supervised classification. 6.2 Experiment II: binary classification of translation For each corpus or combination of corpora – English-Spanish, English-French, Spanish-French and Chinese-English, we ran three experiments, using the following features for supervised learning of translations: • the context-vector similarity; • the co-occurrence model; • both features together. The parameters are discussed in section 4.3. We used all the oracle translations to train the positive values. Results are presented in table 2, they are computed using a 10-folds cross validation. Class T refers to ”Translation”, ¬T to ”Non-Translation”. The evaluation of precision/recall/F-Measure for the class ”Translation” are given in equation 4 to 6. 1332 Precision Recall F-Measure Cl. English-Spanish context0.0% 0.0% 0.0% T vectors 83.3% 99.9% 90.8% ¬T co-occ. 66.2% 44.2% 53.0% T model 89.5% 95.5% 92.4% ¬T both 98.6% 88.6% 93.4% T 97.8% 99.8% 98.7% ¬T French-English context76.5% 10.3% 18.1% T vectors 90.9% 99.6% 95.1% ¬T co-occ. 85.7% 1.2% 2.4% T model 90.1% 100% 94.8% ¬T both 81.0% 80.2% 80.6% T 94.9% 98.7% 96.8% ¬T French-Spanish context0.0% 0.0% 0.0% T vectors 81.0% 100% 89.5% ¬T co-occ. 64.2% 46.5% 53.9% T model 88.2% 93.9% 91.0% ¬T both 98.7% 94.6% 96.7% T 98.8% 99.7% 99.2% ¬T Chinese-English context69.6% 13.3% 22.3% T vectors 91.0% 93.1% 92.1% ¬T co-occ. 73.8% 32.5% 45.1% T model 85.2% 97.1% 90.8% ¬T both 86.7% 74.7% 80.3% T 96.3% 98.3% 97.3% ¬T Table 2: Experiment II: results of binary classification for ”Translation” and ”Non-Translation”. precisionT = |T ∩oracle| |T| (4) recallT = |T ∩oracle| |oracle| (5) FMeasure = 2 × precision × recall precision + recall (6) These results show first that one feature is generally not discriminatory enough to discern correct translation and non-translation pairs. For example with Spanish-English, by using context-vector similarity only, we obtained very high recall/precision for the classification of ”Non-Translation”, but null precision/recall for the classification of ”Translation”. In some other cases, we obtained high precision but poor recall with one feature only, which is not a usefully result as well since most of the correct translations are still labeled as ”Non-Translation”. However, when using both features, the precision is strongly improved up to 98% (English-Spanish or French-Spanish) with a high recall of about 90% for class T. We also achieved about 86%/75% precision/recall in the case of Chinese-English, even though they are very distant languages. This last result is also very promising since it has been obtained from a fully automatically built corpus. Table 3 shows some examples of correctly labeled ”Translation”. 
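A sketch of how the per-class scores reported in Table 2 (shown next, with the definitions in equations (4) to (6) below) can be obtained with a 10-fold cross-validation over the labeled pairs; the feature matrix is restricted to one column or both, depending on which of the three configurations is run, and scikit-learn's DecisionTreeClassifier is again a stand-in for J48.

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

def evaluate_configuration(X, y, columns):
    """X: n x 2 array of [co-occurrence, context-vector] features; y: 'T' / 'not-T' labels
    (the label strings are illustrative); columns: feature columns to use, e.g. [0], [1] or [0, 1]."""
    X = np.asarray(X)[:, columns]
    pred = cross_val_predict(DecisionTreeClassifier(), X, np.asarray(y), cv=10)
    p, r, f, _ = precision_recall_fscore_support(y, pred, labels=["T", "not-T"], zero_division=0)
    for label, pi, ri, fi in zip(["T", "not-T"], p, r, f):
        print(f"{label}: precision={pi:.3f} recall={ri:.3f} F1={fi:.3f}")

# evaluate_configuration(X, y, [1])      # context-vectors only
# evaluate_configuration(X, y, [0])      # co-occurrence model only
# evaluate_configuration(X, y, [0, 1])   # both features
```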
The decision trees obtained indicate that, in general, word pairs with very high co-occurrence model scores are translations, and that the context-vector similarity disambiguate candidates with lower cooccurrence model scores. Interestingly, the trained decision trees are very similar between the different pairs of languages, which inspired the next experiment. 6.3 Experiment III: extension to another pair of languages In the last experiment, we focused on using the knowledge acquired with a given pair of languages to recognize proper translation pairs using a different pair of languages. For this experiment, we used the data from one corpus to train the classifier, and used the data from another combination of languages as the test set. Results are displayed in table 4. These last results are of great interest because they show that translation pairs can be correctly classified even with a classifier trained on another pair of languages. This is very promising because it allows one to prospect new languages using knowledge acquired on a known pairs of languages. As an example, we reached a 77% F-Measure for Chinese-English alignment using a classifier trained on Spanish-French features. This not only confirms the precision/recall of our approach in general, but also shows that the model obtained by training tends to be very stable and accurate across different pairs of languages and different corpora. 1333 Tested with Trained with Sp-En Sp-Fr Fr-En Zh-En Sp-En 98.6/88.8/93.5 98.7/94.9/96.8 91.5/48.3/63.2 99.3/63.0/77.1 Sp-Fr 89.5/77.9/83.9 90.4/82.9/86.5 75.4/53.5/62.6 98.7/63.3/77.1 Fr-En 89.5/77.9/83.9 90.4/82.9/86.5 85.2/80.0/82.6 81.0/87.6/84.2 Zh-En 96.6/89.2/92.7 97.7/94.9/96.3 81.1/50.9/62.5 97.4/65.1/78.1 Table 4: Experiment III: Precision/Recall/F-Measure for label ”Translation”, obtained for all training/testing set combinations. English French myometrium myom`etre lysergide lysergide hyoscyamus jusquiame lysichiton lysichiton brassicaceae brassicac´ees yarrow achill´ee spikemoss s´elaginelle leiomyoma fibromyome ryegrass ivraie English Spanish spirometry espirometr´ıa lolium lolium omentum epipl´on pilocarpine pilocarpina chickenpox varicela bruxism bruxismo psittaciformes psittaciformes commodification mercantilizaci´on talus astr´agalo English Chinese hooliganism 流氓 kindergarten 幼儿园 oyster 牡蛎 fascism 法西斯主义 taxonomy 分类学 mongolian 蒙古人 subpoena 传票 rupee 卢比 archbishop 大主教 serfdom 农奴 typhoid 伤寒 Table 3: Experiment II and III: examples of rare word translations found by our algorithm. Note that even though some words such as ”kindergarten” are not rare in general, they occur with very low frequency in the test corpus. 7 Conclusion We presented a new approach for extracting translations of rare words among aligned comparable documents. To the best of our knowledge, this is one of the first high accuracy extraction of rare lexicon from non-parallel documents. We obtained a FMeasure ranging from about 80% (French-English, Chinese-English) to 97% (French-Spanish). We also obtained good results for extracting lexicon for a pair of languages, using a decision tree trained with the data computed on another pair of languages. We yielded a 77% F-Measure for the extraction of Chinese-English lexicon, using Spanish-French for training the model. On top of these promising results, our approach presents several other advantages. First, we showed that it works well on automatically built corpora which require minimal human intervention. 
Aligned comparable documents can easily be collected and are available in large volumes. Moreover, the proposed machine learning method incorporating both context-vector and co-occurrence model has shown to give good results on pairs of languages that are very different from each other, such as ChineseEnglish. It is also applicable across different training and testing language pairs, making it possible for us to find rare word translations even for languages without training data. The co-occurrence model is completely language independent and have been shown to give good results on various pairs of languages, including Chinese-English. Acknowledgments The authors would like to thank Emmanuel Morin (LINA CNRS 6241) for providing us the comparable corpus used for the experiment in section 2, Simon Shi for extracting and providing the corpus 1334 described in section 5.1, and the anonymous reviewers for their valuable comments. This research is partly supported by ITS/189/09 AND BBNX0220F00310/11PN. References Enrique Alfonseca, Slaven Bilac, and Stefan Pharies. 2008. Decompounding query keywords from compounding languages. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL’08), pages 253–256. Yun-Chuang Chiao and Pierre Zweigenbaum. 2002. Looking for candidate translational equivalents in specialized, comparable corpora. In Proceedings of the 19th International Conference on Computational Linguistics (COLING’02), pages 1208–1212. Ted Dunning. 1993. Accurate Methods for the Statistics of Surprise and Coincidence. Computational Linguistics, 19(1):61–74. Stefan Evert. 2008. Corpora and collocations. In A. Ludeling and M. Kyto, editors, Corpus Linguistics. An International Handbook, chapter 58. Mouton de Gruyter, Berlin. John Firth. 1957. A synopsis of linguistic theory 19301955. Studies in Linguistic Analysis, Philological. Longman. Pascale Fung. 2000. A statistical view on bilingual lexicon extraction–from parallel corpora to non-parallel corpora. In Jean V´eronis, editor, Parallel Text Processing, page 428. Kluwer Academic Publishers. William A. Gale and Kenneth W. Church. 1991. Identifying word correspondence in parallel texts. In Proceedings of the workshop on Speech and Natural Language, HLT’91, pages 152–157, Morristown, NJ, USA. Association for Computational Linguistics. Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publisher. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: An update. SIGKDD Explorations, 11. Audrey Laroche and Philippe Langlais. 2010. Revisiting context-based projection methods for term-translation spotting in comparable corpora. In 23rd International Conference on Computational Linguistics (Coling 2010), pages 617–625, Beijing, China, Aug. Emmanuel Morin, B´eatrice Daille, Koichi Takeuchi, and Kyo Kageura. 2007. Bilingual Terminology Mining – Using Brain, not brawn comparable corpora. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL’07), pages 664– 671, Prague, Czech Republic. Dragos Stefan Munteanu and Daniel Marcu. 2005. Improving Machine Translation Performance by Exploiting Non-Parallel Corpora. Computational Linguistics, 31(4):477–504. Viktor Pekar, Ruslan Mitkov, Dimitar Blagoev, and Andrea Mulloni. 2006. Finding translations for lowfrequency words in comparable corpora. Machine Translation, 20(4):247–266. Jason R. 
Smith, Chris Quirk, and Kristina Toutanova. 2010. Extracting parallel sentences from comparable corpora using document level alignment. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL, pages 403– 411. Tao Tao and ChengXiang Zhai. 2005. Mining comparable bilingual text corpora for cross-language information integration. In KDD ’05: Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pages 691–696, New York, NY, USA. ACM. Shanheng Zhao and Hwee Tou Ng. 2007. Identification and resolution of Chinese zero pronouns: A machine learning approach. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), Prague, Czech Republic. 1335
2011
133
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1336–1345, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Using Bilingual Parallel Corpora for Cross-Lingual Textual Entailment Yashar Mehdad FBK - irst and Uni. of Trento Povo (Trento), Italy [email protected] Matteo Negri FBK - irst Povo (Trento), Italy [email protected] Marcello Federico FBK - irst Povo (Trento), Italy [email protected] Abstract This paper explores the use of bilingual parallel corpora as a source of lexical knowledge for cross-lingual textual entailment. We claim that, in spite of the inherent difficulties of the task, phrase tables extracted from parallel data allow to capture both lexical relations between single words, and contextual information useful for inference. We experiment with a phrasal matching method in order to: i) build a system portable across languages, and ii) evaluate the contribution of lexical knowledge in isolation, without interaction with other inference mechanisms. Results achieved on an English-Spanish corpus obtained from the RTE3 dataset support our claim, with an overall accuracy above average scores reported by RTE participants on monolingual data. Finally, we show that using parallel corpora to extract paraphrase tables reveals their potential also in the monolingual setting, improving the results achieved with other sources of lexical knowledge. 1 Introduction Cross-lingual Textual Entailment (CLTE) has been proposed by (Mehdad et al., 2010) as an extension of Textual Entailment (Dagan and Glickman, 2004) that consists in deciding, given two texts T and H in different languages, if the meaning of H can be inferred from the meaning of T. The task is inherently difficult, as it adds issues related to the multilingual dimension to the complexity of semantic inference at the textual level. For instance, the reliance of current monolingual TE systems on lexical resources (e.g. WordNet, VerbOcean, FrameNet) and deep processing components (e.g. syntactic and semantic parsers, co-reference resolution tools, temporal expressions recognizers and normalizers) has to confront, at the cross-lingual level, with the limited availability of lexical/semantic resources covering multiple languages, the limited coverage of the existing ones, and the burden of integrating languagespecific components into the same cross-lingual architecture. As a first step to overcome these problems, (Mehdad et al., 2010) proposes a “basic solution”, that brings CLTE back to the monolingual scenario by translating H into the language of T. Despite the advantages in terms of modularity and portability of the architecture, and the promising experimental results, this approach suffers from one main limitation which motivates the investigation on alternative solutions. Decoupling machine translation (MT) and TE, in fact, ties CLTE performance to the availability of MT components, and to the quality of the translations. As a consequence, on one side translation errors propagate to the TE engine hampering the entailment decision process. On the other side such unpredictable errors reduce the possibility to control the behaviour of the engine, and devise adhoc solutions to specific entailment problems. This paper investigates the idea, still unexplored, of a tighter integration of MT and TE algorithms and techniques. 
Our aim is to embed cross-lingual processing techniques inside the TE recognition process in order to avoid any dependency on external MT components, and eventually gain full control of the system’s behaviour. Along this direction, we 1336 start from the acquisition and use of lexical knowledge, which represents the basic building block of any TE system. Using the basic solution proposed by (Mehdad et al., 2010) as a term of comparison, we experiment with different sources of multilingual lexical knowledge to address the following questions: (1) What is the potential of the existing multilingual lexical resources to approach CLTE? To answer this question we experiment with lexical knowledge extracted from bilingual dictionaries, and from a multilingual lexical database. Such experiments show two main limitations of these resources, namely: i) their limited coverage, and ii) the difficulty to capture contextual information when only associations between single words (or at most named entities and multiword expressions) are used to support inference. (2) Does MT provide useful resources or techniques to overcome the limitations of existing resources? We envisage several directions in which inputs from MT research may enable or improve CLTE. As regards the resources, phrase and paraphrase tables extracted from bilingual parallel corpora can be exploited as an effective way to capture both lexical relations between single words, and contextual information useful for inference. As regards the algorithms, statistical models based on cooccurrence observations, similar to those used in MT to estimate translation probabilities, may contribute to estimate entailment probabilities in CLTE. Focusing on the resources direction, the main contribution of this paper is to show that the lexical knowledge extracted from parallel corpora allows to significantly improve the results achieved with other multilingual resources. (3) In the cross-lingual scenario, can we achieve results comparable to those obtained in monolingual TE? Our experiments show that, although CLTE seems intrinsically more difficult, the results obtained using phrase and paraphrase tables are better than those achieved by average systems on monolingual datasets. We argue that this is due to the fact that parallel corpora are a rich source of crosslingual paraphrases with no equivalents in monolingual TE. (4) Can parallel corpora be useful also for monolingual TE? To answer this question, we experiment on monolingual RTE datasets using paraphrase tables extracted from bilingual parallel corpora. Our results improve those achieved with the most widely used resources in monolingual TE, namely WordNet, Verbocean, and Wikipedia. The remainder of this paper is structured as follows. Section 2 shortly overviews the role of lexical knowledge in textual entailment, highlighting a gap between TE and CLTE in terms of available knowledge sources. Sections 3 and 4 address the first three questions, giving motivations for the use of bilingual parallel corpora in CLTE, and showing the results of our experiments. Section 5 addresses the last question, reporting on our experiments with paraphrase tables extracted from phrase tables on the monolingual RTE datasets. Section 6 concludes the paper, and outlines the directions of our future research. 
2 Lexical resources for TE and CLTE All current approaches to monolingual TE, either syntactically oriented (Rus et al., 2005), or applying logical inference (Tatu and Moldovan, 2005), or adopting transformation-based techniques (Kouleykov and Magnini, 2005; Bar-Haim et al., 2008), incorporate different types of lexical knowledge to support textual inference. Such information ranges from i) lexical paraphrases (textual equivalences between terms) to ii) lexical relations preserving entailment between words, and iii) wordlevel similarity/relatedness scores. WordNet, the most widely used resource in TE, provides all the three types of information. Synonymy relations can be used to extract lexical paraphrases indicating that words from the text and the hypothesis entail each other, thus being interchangeable. Hypernymy/hyponymy chains can provide entailmentpreserving relations between concepts, indicating that a word in the hypothesis can be replaced by a word from the text. Paths between concepts and glosses can be used to calculate similarity/relatedness scores between single words, that contribute to the computation of the overall similarity between the text and the hypothesis. Besides WordNet, the RTE literature documents the use of a variety of lexical information sources (Bentivogli et al., 2010; Dagan et al., 2009). These include, just to mention the most popular 1337 ones, DIRT (Lin and Pantel, 2001), VerbOcean (Chklovski and Pantel, 2004), FrameNet (Baker et al., 1998), and Wikipedia (Mehdad et al., 2010; Kouylekov et al., 2009). DIRT is a collection of statistically learned inference rules, that is often integrated as a source of lexical paraphrases and entailment rules. VerbOcean is a graph of fine-grained semantic relations between verbs, which are frequently used as a source of precise entailment rules between predicates. FrameNet is a knowledge-base of frames describing prototypical situations, and the role of the participants they involve. It can be used as an alternative source of entailment rules, or to determine the semantic overlap between texts and hypotheses. Wikipedia is often used to extract probabilistic entailment rules based word similarity/relatedness scores. Despite the consensus on the usefulness of lexical knowledge for textual inference, determining the actual impact of these resources is not straightforward, as they always represent one component in complex architectures that may use them in different ways. As emerges from the ablation tests reported in (Bentivogli et al., 2010), even the most common resources proved to have a positive impact on some systems and a negative impact on others. Some previous works (Bannard and Callison-Burch, 2005; Zhao et al., 2009; Kouylekov et al., 2009) indicate, as main limitations of the mentioned resources, their limited coverage, their low precision, and the fact that they are mostly suitable to capture relations mainly between single words. Addressing CLTE we have to face additional and more problematic issues related to: i) the stronger need of lexical knowledge, and ii) the limited availability of multilingual lexical resources. As regards the first issue, it’s worth noting that in the monolingual scenario simple “bag of words” (or “bag of ngrams”) approaches are per se sufficient to achieve results above baseline. In contrast, their application in the cross-lingual setting is not a viable solution due to the impossibility to perform direct lexical matches between texts and hypotheses in different languages. 
This situation makes the availability of multilingual lexical knowledge a necessary condition to bridge the language gap. However, with the only exceptions represented by WordNet and Wikipedia, most of the aforementioned resources are available only for English. Multilingual lexical databases aligned with the English WordNet (e.g. MultiWordNet (Pianta et al., 2002)) have been created for several languages, with different degrees of coverage. As an example, the 57,424 synsets of the Spanish section of MultiWordNet aligned to English cover just around 50% of the WordNet’s synsets, thus making the coverage issue even more problematic than for TE. As regards Wikipedia, the crosslingual links between pages in different languages offer a possibility to extract lexical knowledge useful for CLTE. However, due to their relatively small number (especially for some languages), bilingual lexicons extracted from Wikipedia are still inadequate to provide acceptable coverage. In addition, featuring a bias towards named entities, the information acquired through cross-lingual links can at most complement the lexical knowledge extracted from more generic multilingual resources (e.g bilingual dictionaries). 3 Using Parallel Corpora for CLTE Bilingual parallel corpora represent a possible solution to overcome the inadequacy of the existing resources, and to implement a portable approach for CLTE. To this aim, we exploit parallel data to: i) learn alignment criteria between phrasal elements in different languages, ii) use them to automatically extract lexical knowledge in the form of phrase tables, and iii) use the obtained phrase tables to create monolingual paraphrase tables. Given a cross-lingual T/H pair (with the text in l1 and the hypothesis in l2), our approach leverages the vast amount of lexical knowledge provided by phrase and paraphrase tables to map H into T. We perform such mapping with two different methods. The first method uses a single phrase table to directly map phrases extracted from the hypothesis to phrases in the text. In order to improve our system’s generalization capabilities and increase the coverage, the second method combines the phrase table with two monolingual paraphrase tables (one in l1, and one in l2). This allows to: 1. use the paraphrase table in l2 to find paraphrases of phrases extracted from H; 2. map them to entries in the phrase table, and extract their equivalents in l1; 1338 3. use the paraphrase table in l1 to find paraphrases of the extracted fragments in l1; 4. map such paraphrases to phrases in T. With the second method, phrasal matches between the text and the hypothesis are indirectly performed through paraphrases of the phrase table entries. The final entailment decision for a T/H pair is assigned considering a model learned from the similarity scores based on the identified phrasal matches. In particular, “YES” and “NO” judgements are assigned considering the proportion of words in the hypothesis that are found also in the text. This way to approximate entailment reflects the intuition that, as a directional relation between the text and the hypothesis, the full content of H has to be found in T. 3.1 Extracting Phrase and Paraphrase Tables Phrase tables (PHT) contain pairs of corresponding phrases in two languages, together with association probabilities. They are widely used in MT as a way to figure out how to translate input in one language into output in another language (Koehn et al., 2003). There are several methods to build phrase tables. 
The one adopted in this work consists in learning phrase alignments from a word-aligned bilingual corpus. In order to build English-Spanish phrase tables for our experiments, we used the freely available Europarl V.4, News Commentary and United Nations Spanish-English parallel corpora released for the WMT101. We run TreeTagger (Schmid, 1994) for tokenization, and used the Giza++ (Och and Ney, 2003) to align the tokenized corpora at the word level. Subsequently, we extracted the bilingual phrase table from the aligned corpora using the Moses toolkit (Koehn et al., 2007). Since the resulting phrase table was very large, we eliminated all the entries with identical content in the two languages, and the ones containing phrases longer than 5 words in one of the two sides. In addition, in order to experiment with different phrase tables providing different degrees of coverage and precision, we extracted 7 phrase tables by pruning the initial one on the direct phrase translation probabilities of 0.01, 0.05, 0.1, 0.2, 0.3, 0.4 and 0.5. The resulting 1http://www.statmt.org/wmt10/ phrase tables range from 76 to 48 million entries, with an average of 3.9 words per phrase. Paraphrase tables (PPHT) contain pairs of corresponding phrases in the same language, possibly associated with probabilities. They proved to be useful in a number of NLP applications such as natural language generation (Iordanskaja et al., 1991), multidocument summarization (McKeown et al., 2002), automatic evaluation of MT (Denkowski and Lavie, 2010), and TE (Dinu and Wang, 2009). One of the proposed methods to extract paraphrases relies on a pivot-based approach using phrase alignments in a bilingual parallel corpus (Bannard and Callison-Burch, 2005). With this method, all the different phrases in one language that are aligned with the same phrase in the other language are extracted as paraphrases. After the extraction, pruning techniques (Snover et al., 2009) can be applied to increase the precision of the extracted paraphrases. In our work we used available2 paraphrase databases for English and Spanish which have been extracted using the method previously outlined. Moreover, in order to experiment with different paraphrase sets providing different degrees of coverage and precision, we pruned the main paraphrase table based on the probabilities, associated to its entries, of 0.1, 0.2 and 0.3. The number of phrase pairs extracted varies from 6 million to about 80000, with an average of 3.2 words per phrase. 3.2 Phrasal Matching Method In order to maximize the usage of lexical knowledge, our entailment decision criterion is based on similarity scores calculated with a phrase-to-phrase matching process. A phrase in our approach is an n-gram composed of up to 5 consecutive words, excluding punctuation. Entailment decisions are estimated by combining phrasal matching scores (Scoren) calculated for each level of n-grams , which is the number of 1-grams, 2-grams,..., 5-grams extracted from H that match with n-grams in T. Phrasal matches are performed either at the level of tokens, lemmas, or stems, can be of two types: 2http://www.cs.cmu.edu/ alavie/METEOR 1339 1. Exact: in the case that two phrases are identical at one of the three levels (token, lemma, stem); 2. Lexical: in the case that two different phrases can be mapped through entries of the resources used to bridge T and H (i.e. phrase tables, paraphrases tables, dictionaries or any other source of lexical knowledge). 
For each phrase in H, we first search for exact matches at the level of token with phrases in T. If no match is found at a token level, the other levels (lemma and stem) are attempted. Then, in case of failure with exact matching, lexical matching is performed at the same three levels. To reduce redundant matches, the lexical matches between pairs of phrases which have already been identified as exact matches are not considered. Once matching for each n-gram level has been concluded, the number of matches (Mn) and the number of phrases in the hypothesis (Nn) are used to estimate the portion of phrases in H that are matched at each level (n). The phrasal matching score for each n-gram level is calculated as follows: Scoren = Mn Nn To combine the phrasal matching scores obtained at each n-gram level, and optimize their relative weights, we trained a Support Vector Machine classifier, SVMlight (Joachims, 1999), using each score as a feature. 4 Experiments on CLTE To address the first two questions outlined in Section 1, we experimented with the phrase matching method previously described, contrasting the effectiveness of lexical information extracted from parallel corpora with the knowledge provided by other resources used in the same way. 4.1 Dataset The dataset used for our experiments is an EnglishSpanish entailment corpus obtained from the original RTE3 dataset by translating the English hypothesis into Spanish. It consists of 1600 pairs derived from the RTE3 development and test sets (800+800). Translations have been generated by the CrowdFlower3 channel to Amazon Mechanical Turk4 (MTurk), adopting the methodology proposed by (Negri and Mehdad, 2010). The method relies on translation-validation cycles, defined as separate jobs routed to MTurk’s workforce. Translation jobs return one Spanish version for each hypothesis. Validation jobs ask multiple workers to check the correctness of each translation using the original English sentence as reference. At each cycle, the translated hypothesis accepted by the majority of trustful validators5 are stored in the CLTE corpus, while wrong translations are sent back to workers in a new translation job. Although the quality of the results is enhanced by the possibility to automatically weed out untrusted workers using gold units, we performed a manual quality check on a subset of the acquired CLTE corpus. The validation, carried out by a Spanish native speaker on 100 randomly selected pairs after two translation-validation cycles, showed the good quality of the collected material, with only 3 minor “errors” consisting in controversial but substantially acceptable translations reflecting regional Spanish variations. The T-H pairs in the collected English-Spanish entailment corpus were annotated using TreeTagger (Schmid, 1994) and the Snowball stemmer6 with token, lemma, and stem information. 4.2 Knowledge sources For comparison with the extracted phrase and paraphrase tables, we use a large bilingual dictionary and MultiWordNet as alternative sources of lexical knowledge. Bilingual dictionaries (DIC) allow for precise mappings between words in H and T. To create a large bilingual English-Spanish dictionary we processed and combined the following dictionaries and bilingual resources: - XDXF Dictionaries7: 22,486 entries. 3http://crowdflower.com/ 4https://www.mturk.com/mturk/ 5Workers’ trustworthiness can be automatically determined by means of hidden gold units randomly inserted into jobs. 
6http://snowball.tartarus.org/ 7http://xdxf.revdanica.com/ 1340 Figure 1: Accuracy on CLTE by pruning the phrase table with different thresholds. - Universal dictionary database8: 9,944 entries. - Wiktionary database9: 5,866 entries. - Omegawiki database10: 8,237 entries. - Wikipedia interlanguage links11: 7,425 entries. The resulting dictionary features 53,958 entries, with an average length of 1.2 words. MultiWordNet (MWN) allows to extract mappings between English and Spanish words connected by entailment-preserving semantic relations. The extraction process is dataset-dependent, as it checks for synonymy and hyponymy relations only between terms found in the dataset. The resulting collection of cross-lingual words associations contains 36,794 pairs of lemmas. 4.3 Results and Discussion Our results are calculated over 800 test pairs of our CLTE corpus, after training the SVM classifier over 800 development pairs. This section reports the percentage of correct entailment assignments (accuracy), comparing the use of different sources of lexical knowledge. Initially, in order to find a reasonable trade-off between precision and coverage, we used the 7 phrase tables extracted with different pruning thresholds 8http://www.dicts.info/ 9http://en.wiktionary.org/ 10http://www.omegawiki.org/ 11http://www.wikipedia.org/ MWN DIC PHT PPHT Acc. δ x 55.00 0.00 x 59.88 +4.88 x 62.62 +7.62 x x 62.88 +7.88 Table 1: Accuracy results on CLTE using different lexical resources. (see Section 3.1). Figure 1 shows that with the pruning threshold set to 0.05, we obtain the highest result of 62.62% on the test set. The curve demonstrates that, although with higher pruning thresholds we retain more reliable phrase pairs, their smaller number provides limited coverage leading to lower results. In contrast, the large coverage obtained with the pruning threshold set to 0.01 leads to a slight performance decrease due to probably less precise phrase pairs. Once the threshold has been set, in order to prove the effectiveness of information extracted from bilingual corpora, we conducted a series of experiments using the different resources mentioned in Section 4.2. As it can be observed in Table 1, the highest results are achieved using the phrase table, both alone and in combination with paraphrase tables (62.62% and 62.88% respectively). These results suggest that, with appropriate pruning thresholds, the large number and the longer entries contained in the phrase and paraphrase tables represent an effective way to: i) obtain high coverage, and ii) capture cross-lingual associations between multiple lexical elements. This allows to overcome the bias towards single words featured by dictionaries and lexical databases. As regards the other resources used for comparison, the results show that dictionaries substantially outperform MWN. This can be explained by the low coverage of MWN, whose entries also represent weaker semantic relations (preserving entailment, but with a lower probability to be applied) than the direct translations between terms contained in the dictionary. Overall, our results suggest that the lexical knowledge extracted from parallel data can be successfully used to approach the CLTE task. 1341 Dataset WN VO WIKI PPHT PPHT 0.1 PPHT 0.2 PPHT 0.3 AVG RTE3 61.88 62.00 61.75 62.88 63.38 63.50 63.00 62.37 RTE5 62.17 61.67 60.00 61.33 62.50 62.67 62.33 61.41 RTE3-G 62.62 61.5 60.5 62.88 63.50 62.00 61.5 Table 2: Accuracy results on monolingual RTE using different lexical resources. 
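To make the matching criterion of Section 3.2 concrete, the sketch below computes the per-level phrasal matching scores Score_n = M_n / N_n that are fed to the SVM classifier. It is a simplified illustration: the lemma/stem back-off, punctuation handling and redundancy filtering are omitted, and `table` is assumed to map a hypothesis-side phrase (a tuple of tokens) to the set of text-side phrases it can be rewritten to via the pruned phrase/paraphrase tables.

```python
def _contains(tokens, phrase):
    """True if `phrase` (a tuple of tokens) occurs contiguously in `tokens`."""
    n = len(phrase)
    return any(tuple(tokens[i:i + n]) == phrase
               for i in range(len(tokens) - n + 1))

def phrasal_match_scores(hyp_tokens, text_tokens, table, max_n=5):
    """Score_n = M_n / N_n for n = 1..5 (Section 3.2), simplified."""
    scores = []
    for n in range(1, max_n + 1):
        hyp_ngrams = [tuple(hyp_tokens[i:i + n])
                      for i in range(len(hyp_tokens) - n + 1)]
        if not hyp_ngrams:
            scores.append(0.0)
            continue
        matches = 0
        for g in hyp_ngrams:
            if _contains(text_tokens, g):             # exact match
                matches += 1
            elif any(_contains(text_tokens, eq)       # lexical match via the
                     for eq in table.get(g, ())):     # phrase/paraphrase tables
                matches += 1
        scores.append(matches / len(hyp_ngrams))
    return scores   # five features for the entailment classifier
```

In the paper the five scores are combined by SVMlight; any binary classifier over the same feature vector would play the same role in this sketch.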
5 Using parallel corpora for TE This section addresses the third and the fourth research questions outlined in Section 1. Building on the positive results achieved on the cross-lingual scenario, we investigate the possibility to exploit bilingual parallel corpora in the traditional monolingual scenario. Using the same approach discussed in Section 4, we compare the results achieved with English paraphrase tables with those obtained with other widely used monolingual knowledge resources over two RTE datasets. For the sake of completeness, we report in this section also the results obtained adopting the “basic solution” proposed by (Mehdad et al., 2010). Although it was presented as an approach to CLTE, the proposed method brings the problem back to the monolingual case by translating H into the language of T. The comparison with this method aims at verifying the real potential of parallel corpora against the use of a competitive MT system (Google Translate) in the same scenario. 5.1 Dataset We experiment with the original RTE3 and RTE5 datasets, annotated with token, lemma, and stem information using the TreeTagger and the Snowball stemmer. In addition to confront our method with the solution proposed by (Mehdad et al., 2010) we translated the Spanish hypotheses of our CLTE dataset into English using Google Translate. The resulting dataset was annotated in the same way. 5.2 Knowledge sources We compared the results achieved with paraphrase tables (extracted with different pruning thresholds12) with those obtained using the three most 12We pruned the paraphrase table (PPHT), with probabilities set to 0.1 (PPHT 0.1), 0.2 (PPHT 0.2), and 0.3 (PPHT 0.3) widely used English resources for Textual Entailment (Bentivogli et al., 2010), namely: WordNet (WN). WordNet 3.0 has been used to extract a set of 5396 pairs of words connected by the hyponymy and synonymy relations. VerbOcean (VO). VerbOcean has been used to extract 18232 pairs of verbs connected by the “stronger-than” relation (e.g. “kill” stronger-than “injure”). Wikipedia (WIKI). We performed Latent Semantic Analysis (LSA) over Wikipedia using the jLSI tool (Giuliano, 2007) to measure the relatedness between words in the dataset. Then, we filtered all the pairs with similarity lower than 0.7 as proposed by (Kouylekov et al., 2009). In this way we obtained 13760 word pairs. 5.3 Results and Discussion Table 2 shows the accuracy results calculated over the original RTE3 and RTE5 test sets, training our classifier over the corresponding development sets. The first two rows of the table show that pruned paraphrase tables always outperform the other lexical resources used for comparison, with an accuracy increase up to 3%. In particular, we observe that using 0.2 as a pruning threshold provides a good tradeoff between coverage and precision, leading to our best results on both datasets (63.50% for RTE3, and 62.67% for RTE5). It’s worth noting that these results, compared with the average scores reported by participants in the two editions of the RTE Challenge (AVG column), represent an accuracy improvement of more than 1%. Overall, these results confirm our claim that increasing the coverage using context sensitive phrase pairs obtained from large parallel corpora, results in better performance not only in CLTE, 1342 but also in the monolingual scenario. The comparison with the results achieved on monolingual data obtained by automatically translating the Spanish hypotheses (RTE3-G row in Table 2) leads to four main observations. 
First, we notice that dealing with MT-derived inputs, the optimal pruning threshold changes from 0.2 to 0.1, leading to the highest accuracy of 63.50%. This suggests that the noise introduced by incorrect translations can be tackled by increasing the coverage of the paraphrase table. Second, in line with the findings of (Mehdad et al., 2010), the results obtained over the MT-derived corpus are equal to those we achieve over the original RTE3 dataset (i.e. 63.50%). Third, the accuracy obtained over the CLTE corpus using combined phrase and paraphrase tables (62.88%, as reported in Table 1) is comparable to the best result gained over the automatically translated dataset (63.50%). In all the other cases, the use of phrase and paraphrase tables on CLTE data outperforms the results achieved on the same data after translation. Finally, it’s worth remarking that applying our phrase matching method on the translated dataset without any additional source of knowledge would result in an overall accuracy of 62.12%, which is lower than the result obtained using only phrase tables on cross-lingual data (62.62%). This demonstrates that phrase tables can successfully replace MT systems in the CLTE task. In light of this, we suggest that extracting lexical knowledge from parallel corpora is a preferable solution to approach CLTE. One of the main reasons is that placing a black-box MT system at the front-end of the entailment process reduces the possibility to cope with wrong translations. Furthermore, the access to MT components is not easy (e.g. Google Translate limits the number and the size of queries, while open source MT tools cover few language pairs). Moreover, the task of developing a full-fledged MT system often requires the availability of parallel corpora, and is much more complex than extracting lexical knowledge from them. 6 Conclusion and Future Work In this paper we approached the cross-lingual Textual Entailment task focusing on the role of lexical knowledge extracted from bilingual parallel corpora. One of the main difficulties in CLTE raises from the lack of adequate knowledge resources to bridge the lexical gap between texts and hypotheses in different languages. Our approach builds on the intuition that the vast amount of knowledge that can be extracted from parallel data (in the form of phrase and paraphrase tables) offers a possible solution to the problem. To check the validity of our assumptions we carried out several experiments on an English-Spanish corpus derived from the RTE3 dataset, using phrasal matches as a criterion to approximate entailment. Our results show that phrase and paraphrase tables allow to: i) outperform the results achieved with the few multilingual lexical resources available, and ii) reach performance levels above the average scores obtained by participants in the monolingual RTE3 challenge. These improvements can be explained by the fact that the lexical knowledge extracted from parallel data provides good coverage both at the level of single words, and at the level of phrases. As a further contribution, we explored the application of paraphrase tables extracted from parallel data in the traditional monolingual scenario. Contrasting results with those obtained with the most widely used resources in TE, we demonstrated the effectiveness of paraphrase tables as a mean to overcome the bias towards single words featured by the existing resources. 
Our future work will address both the extraction of lexical information from bilingual parallel corpora, and its use for TE and CLTE. On one side, we plan to explore alternative ways to build phrase and paraphrase tables. One possible direction is to consider linguistically motivated approaches, such as the extraction of syntactic phrase tables as proposed by (Yamada and Knight, 2001). Another interesting direction is to investigate the potential of paraphrase patterns (i.e. patterns including partof-speech slots), extracted from bilingual parallel corpora with the method proposed by (Zhao et al., 2009). On the other side we will investigate more sophisticated methods to exploit the acquired lexical knowledge. As a first step, the probability scores assigned to phrasal entries will be considered to perform weighted phrase matching as an improved criterion to approximate entailment. 1343 Acknowledgments This work has been partially supported by the ECfunded project CoSyne (FP7-ICT-4-24853). References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. Proceedings of COLING-ACL. Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with Bilingual Parallel Corpora. Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005). Roy Bar-haim , Jonathan Berant , Ido Dagan , Iddo Greental , Shachar Mirkin , Eyal Shnarch , and Idan Szpektor. 2008. Efficient semantic deduction and approximate matching over compact parse forests. Proceedings of the TAC 2008 Workshop on Textual Entailment. Luisa Bentivogli, Peter Clark, Ido Dagan, Hoa Trang Dang, and Danilo Giampiccolo. 2010. The Sixth PASCAL Recognizing Textual Entailment Challenge. Proceedings of the the Text Analysis Conference (TAC 2010). Timothy Chklovski and Patrick Pantel. 2004. Verbocean: Mining the web for fine-grained semantic verb relations. Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP-04). Ido Dagan and Oren Glickman. 2004. Probabilistic textual entailment: Generic applied modeling of language variability. Proceedings of the PASCAL Workshop of Learning Methods for Text Understanding and Mining. Ido Dagan, Bill Dolan, Bernardo Magnini, and Dan Roth. 2009. Recognizing textual entailment: Rational, evaluation and approaches. Journal of Natural Language Engineering , Volume 15, Special Issue 04, pp i-xvii. Michael Denkowski and Alon Lavie. 2010. Extending the METEOR Machine Translation Evaluation Metric to the Phrase Level. Proceedings of Human Language Technologies (HLT-NAACL 2010). Georgiana Dinu and Rui Wang. 2009. Inference Rules and their Application to Recognizing Textual Entailment. Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009). Claudio Giuliano. 2007. jLSI a tool for latent semantic indexing. Software available at http://tcc.itc.it/research/textec/toolsresources/jLSI.html. Lidija Iordanskaja, Richard Kittredge, and Alain Polg re.. 1991. Lexical selection and paraphrase in a meaning text generation model. Natural Language Generation in Articial Intelligence and Computational Linguistics. Thorsten Joachims. 1999. Making large-scale support vector machine learning practical. Philipp Koehn, Franz Josef Och, and Daniel Marcu 2003. Statistical Phrase-Based Translation. Proceedings of HLT/NAACL. 
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. Proceedings of the Conference of the Association for Computational Linguistics (ACL). Milen Kouleykov and Bernardo Magnini. 2005. Tree edit distance for textual entailment. Proceedings of RALNP-2005, International Conference on Recent Advances in Natural Language Processing. Milen Kouylekov, Yashar Mehdad, and Matteo Negri. 2010. Mining Wikipedia for Large-Scale Repositories of Context-Sensitive Entailment Rules. Proceedings of the Language Resources and Evaluation Conference (LREC 2010). Yashar Mehdad, Alessandro Moschitti and Fabio Massimo Zanzotto. 2010. Syntactic/semantic structures for textual entailment recognition. Proceedings of the 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2010). Dekang Lin and Patrick Pantel. 2001. DIRT - Discovery of Inference Rules from Text.. Proceedings of ACM Conference on Knowledge Discovery and Data Mining (KDD-01). Kathleen R. McKeown, Regina Barzilay, David Evans, Vasileios Hatzivassiloglou, Judith L. Klavans, Ani Nenkova, Carl Sable, Barry Schiffman, and Sergey Sigelman. 2002. Tracking and summarizing news on a daily basis with Columbias Newsblaster. Proceedings of the Human Language Technology Conference.. Yashar Mehdad, Matteo Negri, and Marcello Federico. 2010. Towards Cross-Lingual Textual Entailment. Proceedings of the 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2010). Dan Moldovan and Adrian Novischi. 2002. Lexical chains for question answering. Proceedings of COLING. Matteo Negri and Yashar Mehdad. 2010. Creating a Bilingual Entailment Corpus through Translations with Mechanical Turk: $100 for a 10-day Rush. Proceedings of the NAACL 2010 Workshop on Creating Speech and Language Data With Amazons Mechanical Turk . Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):1951. 1344 Emanuele Pianta, Luisa Bentivogli, and Christian Girardi. 2002. MultiWordNet: Developing and Aligned Multilingual Database. Proceedings of the First International Conference on Global WordNet. Vasile Rus, Art Graesser, and Kirtan Desai 2005. Lexico-Syntactic Subsumption for Textual Entailment. Proceedings of RANLP 2005. Helmut Schmid 2005. Probabilistic Part-of-Speech Tagging Using Decision Trees. Proceedings of the International Conference on New Methods in Language Processing. Marta Tatu andDan Moldovan. 2005. A semantic approach to recognizing textual entailment. Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005). Matthew Snover, Nitin Madnani, Bonnie Dorr, and Richard Schwartz. 2009. Fluency, Adequacy, or HTER? Exploring Different Human Judgments with a Tunable MT Metric. Proceedings of WMT09. Rui Wang and Yi Zhang,. 2009. Recognizing Textual Relatedness with Predicate-Argument Structures. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2009). Kenji Yamada and Kevin Knight 2001. A Syntax-Based Statistical Translation Model. Proceedings of the Conference of the Association for Computational Linguistics (ACL). 
Shiqi Zhao, Haifeng Wang, Ting Liu, and Sheng Li. 2009. Extracting Paraphrase Patterns from Bilingual Parallel Corpora. Journal of Natural Language Engineering , Volume 15, Special Issue 04, pp 503-526. 1345
2011
134
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1346–1355, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Using Large Monolingual and Bilingual Corpora to Improve Coordination Disambiguation Shane Bergsma, David Yarowsky, Kenneth Church Deptartment of Computer Science and Human Language Technology Center of Excellence Johns Hopkins University [email protected], [email protected], [email protected] Abstract Resolving coordination ambiguity is a classic hard problem. This paper looks at coordination disambiguation in complex noun phrases (NPs). Parsers trained on the Penn Treebank are reporting impressive numbers these days, but they don’t do very well on this problem (79%). We explore systems trained using three types of corpora: (1) annotated (e.g. the Penn Treebank), (2) bitexts (e.g. Europarl), and (3) unannotated monolingual (e.g. Google N-grams). Size matters: (1) is a million words, (2) is potentially billions of words and (3) is potentially trillions of words. The unannotated monolingual data is helpful when the ambiguity can be resolved through associations among the lexical items. The bilingual data is helpful when the ambiguity can be resolved by the order of words in the translation. We train separate classifiers with monolingual and bilingual features and iteratively improve them via co-training. The co-trained classifier achieves close to 96% accuracy on Treebank data and makes 20% fewer errors than a supervised system trained with Treebank annotations. 1 Introduction Determining which words are being linked by a coordinating conjunction is a classic hard problem. Consider the pair: +ellipsis rocket\w1 and mortar\w2 attacks\h −ellipsis asbestos\w1 and polyvinyl\w2 chloride\h +ellipsis is about both rocket attacks and mortar attacks, unlike −ellipsis which is not about asbestos chloride. We use h to refer to the head of the phrase, and w1 and w2 to refer to the other two lexical items. Natural Language Processing applications need to recognize NP ellipsis in order to make sense of new sentences. For example, if an Internet search engine is given the phrase rocket attacks as a query, it should rank documents containing rocket and mortar attacks highly, even though rocket and attacks are not contiguous in the document. Furthermore, NPs with ellipsis often require a distinct type of reordering when translated into a foreign language. Since coordination is both complex and productive, parsers and machine translation (MT) systems cannot simply memorize the analysis of coordinate phrases from training text. We propose an approach to recognizing ellipsis that could benefit both MT and other NLP technology that relies on shallow or deep syntactic analysis. While the general case of coordination is quite complicated, we focus on the special case of complex NPs. Errors in NP coordination typically account for the majority of parser coordination errors (Hogan, 2007). The information needed to resolve coordinate NP ambiguity cannot be derived from hand-annotated data, and we follow previous work in looking for new information sources to apply to this problem (Resnik, 1999; Nakov and Hearst, 2005; Rus et al., 2007; Pitler et al., 2010). We first resolve coordinate NP ambiguity in a word-aligned parallel corpus. In bitexts, both monolingual and bilingual information can indicate NP structure. We create separate classifiers using monolingual and bilingual feature views. 
We train the two classifiers using co-training, iteratively improving the accuracy of one classifier by learning from the predictions of the other. Starting from only two 1346 initial labeled examples, we are able to train a highly accurate classifier using only monolingual features. The monolingual classifier can then be used both within and beyond the aligned bitext. In particular, it achieves close to 96% accuracy on both bitext data and on out-of-domain examples in the Treebank. 2 Problem Definition and Related Tasks Our system operates over a part-of-speech tagged input corpus. We attempt to resolve the ambiguity in all tag sequences matching the expression: [DT|PRP$] (N.*|J.*) and [DT|PRP$] (N.*|J.*) N.* e.g. [the] rocket\w1 and [the] mortar\w2 attacks\h Each example ends with a noun, h. Preceding h are a pair of possibly-conjoined words, w1 and w2, either nouns (rocket and mortar), adjectives, or a mix of the two. We allow determiners or possessive pronouns before w1 and/or w2. This pattern is very common. Depending on the domain, we find it in roughly one of every 10 to 20 sentences. We merge identical matches in our corpus into a single example for labeling. Roughly 38% of w1,w2 pairs are both adjectives, 26% are nouns, and 36% are mixed. The task is to determine whether w1 and w2 are conjoined or not. When they are not conjoined, there are two cases: 1) w1 is actually conjoined with w2 h as a whole (e.g. asbestos and polyvinyl chloride), or 2) The conjunction links something higher up in the parse tree, as in, “farmers are getting older\w1 and younger\w2 people\h are reluctant to take up farming.” Here, and links two separate clauses. Our task is both narrower and broader than previous work. It is broader than previous approaches that have focused only on conjoined nouns (Resnik, 1999; Nakov and Hearst, 2005). Although pairs of adjectives are usually conjoined (and mixed tags are usually not), this is not always true, as in older/younger above. For comparison, we also state accuracy on the noun-only examples (§ 8). Our task is more narrow than the task tackled by full-sentence parsers, but most parsers do not bracket NP-internal structure at all, since such structure is absent from the primary training corpus for statistical parsers, the Penn Treebank (Marcus et al., 1993). We confirm that standard broad-coverage parsers perform poorly on our task (§ 7). Vadas and Curran (2007a) manually annotated NP structure in the Penn Treebank, and a few custom NP parsers have recently been developed using this data (Vadas and Curran, 2007b; Pitler et al., 2010). Our task is more narrow than the task handled by these parsers since we do not handle other, less-frequent and sometimes more complex constructions (e.g. robot arms and legs). However, such constructions are clearly amenable to our algorithm. In addition, these parsers have only evaluated coordination resolution within base NPs, simplifying the task and rendering the aforementioned older/younger problem moot. Finally, these custom parsers have only used simple count features; for example, they have not used the paraphrases we describe below. 3 Supervised Coordination Resolution We adopt a discriminative approach to resolving coordinate NP ambiguity. For each unique coordinate NP in our corpus, we encode relevant information in a feature vector, ¯x. A classifier scores these vectors with a set of learned weights, ¯w. We assume N labeled examples {(y1, ¯x1), ..., (yN, ¯xN)} are available to train the classifier. 
We use ‘y = 1’ as the class label for NPs with ellipsis and ‘y = 0’ for NPs without. Since our particular task requires a binary decision, any standard learning algorithm can be used to learn the feature weights on the training data. We use (regularized) logistic regression (a.k.a. maximum entropy) since it has been shown to perform well on a range of NLP tasks, and also because its probabilistic interpretation is useful for co-training (§ 4). In binary logistic regression, the probability of a positive class takes the form of the logistic function: Pr(y = 1) = exp( ¯w · ¯x) 1 + exp( ¯w · ¯x) Ellipsis is predicted if Pr(y = 1) > 0.5 (equivalently, ¯w · ¯x > 0), otherwise we predict no ellipsis. Supervised classifiers easily incorporate a range of interdependent information into a learned decision function. The cost for this flexibility is typically the need for labeled training data. The more features we use, the more labeled data we need, since for linear classifiers, the number of examples needed to reach optimum performance is at most linear in the 1347 Phrase Evidence Pattern dairy and meat English: ... production of dairy and meat... h of w1 and w2 production English: ... dairy production and meat production... w1 h and w2 h (ellipsis) English: ... meat and dairy production... w2 and w1 h Spanish: ... producci´on l´actea y c´arnica... h w1 ... w2 →production dairy and meat Finnish: ... maidon- ja lihantuotantoon... w1- ... w2h →dairy- and meatproduction French: ... production de produits laitiers et de viande... h ... w1 ... w2 →production of products dairy and of meat asbestos and English: ... polyvinyl chloride and asbestos... w2 h and w1 polyvinyl English: ... asbestos , and polyvinyl chloride... w1 , and w2 h chloride English: ... asbestos and chloride... w1 and h (no ellipsis) Portuguese: ... o amianto e o cloreto de polivinilo... w1 ... h ... w2 →the asbestos and the chloride of polyvinyl Italian: ... l’ asbesto e il polivinilcloruro... w1 ... w2h →the asbestos and the polyvinylchloride Table 1: Monolingual and bilingual evidence for ellipsis or lack-of-ellipsis in coordination of [w1 and w2 h] phrases. number of features (Vapnik, 1998). In § 4, we propose a way to circumvent the need for labeled data. We now describe the particular monolingual and bilingual information we use for this problem. We refer to Table 1 for canonical examples of the two classes and also to provide intuition for the features. 3.1 Monolingual Features Count features These real-valued features encode the frequency, in a large auxiliary corpus, of relevant word sequences. Co-occurrence frequencies have long been used to resolve linguistic ambiguities (Dagan and Itai, 1990; Hindle and Rooth, 1993; Lauer, 1995). With the massive volumes of raw text now available, we can look for very specific and indicative word sequences. Consider the phrase dairy and meat production (Table 1). A high count in raw text for the paraphrase “production of dairy and meat” implies ellipsis in the original example. In the third column of Table 1, we suggest a pattern that generalizes the particular piece of evidence. It is these patterns and other English paraphrases that we encode in our count features (Table 2). We also use (but do not list) count features for the four paraphrases proposed in Nakov and Hearst (2005, § 3.2.3). Such specific paraphrases are more common than one might think. 
In our experiments, at least 20% of examples have non-zero counts for a 5-gram pattern, while over 70% of examples have counts for a 4-gram pattern. Our features also include counts for subsequences of the full phrase. High counts for “dairy production” alone or just “dairy and meat” also indicate ellipsis. On the other hand, like Pitler et al. (2010), we have a feature for the count of “dairy and production.” Frequent conjoining of w1 and h is evidence that there is no ellipsis, that w1 and h are compatible and heads of two separate and conjoined NPs. Many of our patterns are novel in that they include commas or determiners. The presence of these often indicate that there are two separate NPs. E.g. seeing asbestos , and polyvinyl chloride or the asbestos and the polyvinyl chloride suggests no ellipsis. We also propose patterns that include left-andright context around the NP. These aim to capture salient information about the NP’s distribution as an entire unit. Finally, patterns involving prepositions look for explicit paraphrasing of the nominal relations; the presence of “h PREP w1 and w2” in a corpus would suggest ellipsis in the original NP. In total, we have 48 separate count features, requiring counts for 315 distinct N-grams for each example. We use log-counts as the feature value, and use a separate binary feature to indicate if a particular count is zero. We efficiently acquire the counts using custom tools for managing web-scale N-gram 1348 Real-valued count features. C(p) →count of p C(w1) C(w2) C(h) C(w1 CC w2) C(w1 h) C(w2 h) C(w2 CC w1) C(w1 CC h) C(h CC w1) C(DT w1 CC w2) C(w1 , CC w2) C(DT w2 CC w1) C(w2 , CC w1) C(DT w1 CC h) C(w1 CC w2 ,) C(DT h CC w1) C(w2 CC w1 ,) C(DT w1 and DT w2) C(w1 CC DT w2) C(DT w2 and DT w1) C(w2 CC DT w1) C(DT h and DT w1) C(w1 CC DT h) C(DT h and DT w2) C(h CC DT w1) C(⟨L-CTXTi⟩w1 and w2 h) C(w1 CC w2 h) C(w1 and w2 h ⟨R-CTXTi⟩) C(h PREP w1) C(h PREP w1 CC w2) C(h PREP w2) Count feature filler sets DT = {the, a, an, its, his} CC = {and, or, ‘,’} PREP = {of, for, in, at, on, from, with, about} Binary features and feature templates →{0, 1} wrd1=⟨wrd(w1)⟩ tag1=⟨tag(w1)⟩ wrd2=⟨wrd(w2)⟩ tag2=⟨tag(w2)⟩ wrdh=⟨wrd(h)⟩ tagh=⟨tag(h)⟩ wrd12=⟨wrd(w1),wrd(w2)⟩ wrd(w1)=wrd(w2) tag12=⟨tag(w1),tag(w2)⟩ tag(w1)=tag(w2) tag12h=⟨tag(w1),tag(w1),tag(h)⟩ Table 2: Monolingual features. For counts using the filler sets CC, DT and PREP, counts are summed across all filler combinations. In contrast, feature templates are denoted with ⟨·⟩, where the feature label depends on the ⟨bracketed argument⟩. E.g., we have separate count feature for each item in the L/R context sets, where {L-CTXT} = {with, and, as, including, on, is, are, &}, {R-CTXT} = {and, have, of, on, said, to, were, &} data (§ 5). Previous approaches have used search engine page counts as substitutes for co-occurrence information (Nakov and Hearst, 2005; Rus et al., 2007). These approaches clearly cannot scale to use the wide range of information used in our system. Binary features Table 2 gives the binary features and feature templates. These are templates in the sense that every unique word or tag fills the template and corresponds to a unique feature. We can thus learn if particular words or tags are associated with ellipsis. We also include binary features to flag the presence of any optional determiners before w1 or w2. We also have binary features for the context words that precede and follow the tag sequence in the source corpus. 
These context features are analogous to the L/R-CTXT features that were counted in the auxiliary corpus. Our classifier learns, for examMonolingual: ¯xm Bilingual: ¯xb C(w1):14.4 C(detl=h * w1 * w2),Dutch:1 C(w2):15.4 C(detl=h * * w1 * * w2),Fr.:1 C(h):17.2 C(detl=h w1 h * w2),Greek:1 C(w1 CC w2):9.0 C(detl=h w1 * w2),Spanish:1 C(w1 h):9.8 C(detl=w1- * w2h),Swedish:1 C(w2 h):10.2 C(simp=h w1 w2),Dutch:1 C(w2 CC w1):10.5 C(simp=h w1 w2),French:1 C(w1 CC h):3.5 C(simp=h w1 h w2),Greek:1 C(h CC w1):6.8 C(simp=h w1 w2),Spanish:1 C(DT w2 CC w1:7.8 C(simp=w1 w2h),Swedish:1 C(w1 and w2 h and):2.4 C(span=5),Dutch:1 C(h PREP w1 CC w2):2.6 C(span=7),French:1 wrd1=dairy:1 C(span=5),Greek:1 wrd2=meat:1 C(span=4),Spanish:1 wrdh=production:1 C(span=3),Swedish:1 tag1=NN:1 C(ord=h w1 w2),Dutch:1 tag2=NN:1 C(ord=h w1 w2),French:1 tagh=NN:1 C(ord=h w1 h w2),Greek:1 wrd12=dairy,meat:1 C(ord=h w1 w2),Spanish:1 tag12=NN,NN:1 C(ord=w1 w2 h),Swedish:1 tag(w1)=tag(w2):1 C(ord=h w1 w2):4 tag12h=NN,NN,NN:1 C(ord=w1 w2 h):1 Table 3: Example of actual instantiated feature vectors for dairy and meat production (in label:value format). Monolingual feature vector, ¯xm, on the left (both count and binary features, see Table 2), Bilingual feature vector, ¯xb, on the right (see Table 4). ple, that instances preceded by the words its and in are likely to have ellipsis: these words tend to precede single NPs as opposed to conjoined NP pairs. Example Table 3 provides part of the actual instantiated monolingual feature vector for dairy and meat production. Note the count features have logarithmic values, while only the non-zero binary features are included. A later stage of processing extracts a list of feature labels from the training data. This list is then used to map feature labels to integers, yielding the standard (sparse) format used by most machine learning software (e.g., 1:14.4 2:15.4 3:17.2 ... 7149:1 24208:1). 3.2 Bilingual Features The above features represent the best of the information available to a coordinate NP classifier when operating on an arbitrary text. In some domains, however, we have additional information to inform our decisions. We consider the case where we seek to predict coordinate structure in parallel text: i.e., English text with a corresponding translation in one 1349 or more target languages. A variety of mature NLP tools exists in this domain, allowing us to robustly align the parallel text first at the sentence and then at the word level. Given a word-aligned parallel corpus, we can see how the different types of coordinate NPs are translated in the target languages. In Romance languages, examples with ellipsis, such as dairy and meat production (Table 1), tend to correspond to translations with the head in the first position, e.g. “producci´on l´actea y c´arnica” in Spanish (examples taken from Europarl (Koehn, 2005)). When there is no ellipsis, the head-first syntax leads to the “w1 and h w2” ordering, e.g. amianto e o cloreto de polivinilo in Portuguese. Another clue for ellipsis is the presence of a dangling hyphen, as in the Finnish maidon- ja lihantuotantoon. We find such hyphens especially common in Germanic languages like Dutch. In addition to language-specific clues, a translation may resolve an ambiguity by paraphrasing the example in the same way it may be paraphrased in English. E.g., we see hard and soft drugs translated into Spanish as drogas blandas y drogas duras with the head, drogas, repeated (akin to soft drugs and hard drugs in English). 
One could imagine manually defining the relationship between English NP coordination and the patterns in each language, but this would need to be repeated for each language pair, and would likely miss many useful patterns. In contrast, by representing the translation patterns as features in a classifier, we can instead automatically learn the coordinationtranslation correspondences, in any language pair. For each occurrence of a coordinate NP in a wordaligned bitext, we inspect the alignments and determine the mapping of w1, w2 and h. Recall that each of our examples represents all the occurrences of a unique coordinate NP in a corpus. We therefore aggregate translation information over all the occurrences. Since the alignments in automaticallyaligned parallel text are noisy, the more occurrences we have, the more translations we have, and the more likely we are to make a correct decision. For some common instances in Europarl, like Agriculture and Rural Development, we have thousands of translations in several languages. Table 4 provides the bilingual feature templates. The notation indicates that, for a given coordinate NP, we count the frequency of each translaC⟨detl(w1,w2,h)⟩,⟨LANG⟩ C⟨simp(w1,w2,h)⟩,⟨LANG⟩ C⟨span(w1,w2,h)⟩,⟨LANG⟩ C⟨ord(w1,w2,h)⟩,⟨LANG⟩ C⟨ord(w1,w2,h)⟩ Table 4: Real-valued bilingual feature templates. The shorthand is detl=“detailed pattern,” simp=“simple pattern,” span=“span of pattern,” ord=“order of words.” The notation C⟨p⟩,⟨LANG⟩means the number of times we see the pattern (or span) ⟨p⟩as the aligned translation of the coordinate NP in the target language ⟨LANG⟩. tion pattern in each target language, and generate real-valued features for these counts. The feature counts are indexed to the particular pattern and language. We also have one language-independent feature, C⟨ord(w1,w2,h)⟩, which gives the frequency of each ordering across all languages. The span is the number of tokens collectively spanned by the translations of w1, w2 and h. The “detailed pattern” represents the translation using wildcards for all other foreign words, but maintains punctuation. Letting ‘*’ stand for the wildcard, the detailed patterns for the translations of dairy and meat production in Table 1 would be [h w1 * w2] (Spanish), [w1- * w2h] (Finnish) and [h * * w1 * * w2] (French). Four or more consecutive wildcards are converted to ‘...’. For the “simple pattern,” we remove the wildcards and punctuation. Note that our aligner allows the English word to map to multiple target words. The simple pattern differs from the ordering in that it denotes how many tokens each of w1, w2 and h span. Example Table 3 also provides part of the actual instantiated bilingual feature vector for dairy and meat production. 4 Bilingual Co-training We exploit the orthogonality of the monolingual and bilingual features using semi-supervised learning. These features are orthogonal in the sense that they look at different sources of information for each example. If we had enough training data, a good classifier could be trained using either monolingual or bilingual features on their own. 
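Before turning to the co-training procedure itself, the bilingual templates of Table 4 can be made concrete with a rough sketch. It is an illustrative simplification, not the authors' implementation: it assumes each aligned target token carries a single role and ignores multi-word fusions such as the Swedish w2h case.

def bilingual_patterns(target_tokens, alignment):
    """Derive pattern values from one aligned translation of 'w1 and w2 h'.
    alignment maps target token positions to a role in {'w1','w2','h'};
    unaligned positions inside the span become wildcards ('*')."""
    positions = sorted(alignment)
    lo, hi = positions[0], positions[-1]
    detailed = []
    for i in range(lo, hi + 1):
        if i in alignment:
            detailed.append(alignment[i])
        elif target_tokens[i] in {",", "-"}:
            detailed.append(target_tokens[i])        # keep punctuation
        else:
            detailed.append("*")
    return {"detl": " ".join(detailed),
            "simp": " ".join(d for d in detailed if d in ("w1", "w2", "h")),
            "ord":  " ".join(alignment[i] for i in positions),
            "span": hi - lo + 1}

# Spanish 'produccion lactea y carnica' with alignment {0: 'h', 1: 'w1', 3: 'w2'}
# yields detl='h w1 * w2', simp='h w1 w2', ord='h w1 w2', span=4 (cf. Table 3).

In the actual system these per-occurrence values are aggregated into counts over all occurrences of the coordinate NP, indexed by pattern and target language.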
With classifiers trained on even a little labeled data, it’s feasible that for a particular example, the monolingual classifier might be confident when the bilingual classifier is 1350 Algorithm 1 The bilingual co-training algorithm: subscript m corresponds to monolingual, b to bilingual Given: • a set L of labeled training examples in the bitext, {(¯xi, yi)} • a set U of unlabeled examples in the bitext, {¯xj} • hyperparams: k (num. iterations), um and ub (size smaller unlabeled pools), nm and nb (num. new labeled examples each iteration), C: regularization param. for classifier training Create Lm ←L Create Lb ←L Create a pool Um by choosing um examples randomly from U. Create a pool Ub by choosing ub examples randomly from U. for i = 0 to k do Use Lm to train a classifier hm using only ¯xm, the monolingual features of ¯x Use Lb to train a classifier hb using only ¯xb, the bilingual features of ¯x Use hm to label Um, move the nm most-confident examples to Lb Use hb to label Ub, move the nb most-confident examples to Lm Replenish Um and Ub randomly from U with nm and nb new examples end for uncertain, and vice versa. This suggests using a co-training approach (Yarowsky, 1995; Blum and Mitchell, 1998). We train separate classifiers on the labeled data. We use the predictions of one classifier to label new examples for training the orthogonal classifier. We iterate this training and labeling. We outline how this procedure can be applied to bitext data in Algorithm 1 (above). We follow prior work in drawing predictions from smaller pools, Um and Ub, rather than from U itself, to ensure the labeled examples “are more representative of the underlying distribution” (Blum and Mitchell, 1998). We use a logistic regression classifier for hm and hb. Like Blum and Mitchell (1998), we also create a combined classifier by making predictions according to argmaxy=1,0 Pr(y|xm)Pr(y|xb). The hyperparameters of the algorithm are 1) k, the number of iterations, 2) um and ub, the size of the smaller unlabeled pools, 3) nm and nb, the number of new labeled examples to include at each iteration, and 4) the regularization parameter of the logistic regression classifier. All such parameters can be tuned on a development set. Like Blum and Mitchell (1998), we ensure that we maintain roughly the true class balance in the labeled examples added at each iteration; we also estimate this balance using development data. There are some differences between our approach and the co-training algorithm presented in Blum and Mitchell (1998, Table 1). One of our key goals is to produce an accurate classifier that uses only monolingual features, since only this classifier can be applied to arbitrary monolingual text. We thus break the symmetry in the original algorithm and allow hb to label more examples for hm than vice versa, so that hm will improve faster. This is desirable because we don’t have unlimited unlabeled examples to draw from, only those found in our parallel text. 5 Data Web-scale text data is used for monolingual feature counts, parallel text is used for classifier co-training, and labeled data is used for training and evaluation. Web-scale N-gram Data We extract our counts from Google V2: a new N-gram corpus (with N-grams of length one-to-five) created from the same one-trillion-word snapshot of the web as the Google 5-gram Corpus (Brants and Franz, 2006), but with enhanced filtering and processing of the source text (Lin et al., 2010, Section 5). 
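Returning to Algorithm 1 for a moment, its loop translates fairly directly into code. The following is a minimal sketch and not the authors' implementation: scikit-learn's LogisticRegression stands in for the regularized classifier, and the smaller random pools Um/Ub and class-balance bookkeeping are omitted.

import numpy as np
from sklearn.linear_model import LogisticRegression

def cotrain(Xm, Xb, y, labeled, unlabeled, k=50, nm=50, nb=50, C=0.1):
    """Bilingual co-training sketch. Xm/Xb: monolingual and bilingual
    feature matrices over the same examples; y: labels, trusted only on
    the initially labeled indices; each view labels examples for the other."""
    Lm, Lb = list(labeled), list(labeled)
    U, y = set(unlabeled), np.array(y)
    for _ in range(k):
        hm = LogisticRegression(C=C).fit(Xm[Lm], y[Lm])
        hb = LogisticRegression(C=C).fit(Xb[Lb], y[Lb])
        pool = sorted(U)
        if not pool:
            break
        # hm labels its most confident pool examples as training data for hb
        conf_m = np.abs(hm.predict_proba(Xm[pool])[:, 1] - 0.5)
        new_for_b = [pool[i] for i in np.argsort(-conf_m)[:nm]]
        y[new_for_b] = hm.predict(Xm[new_for_b])
        Lb += new_for_b
        # hb labels (different) confident examples as training data for hm
        rest = [j for j in pool if j not in set(new_for_b)]
        new_for_m = []
        if rest:
            conf_b = np.abs(hb.predict_proba(Xb[rest])[:, 1] - 0.5)
            new_for_m = [rest[i] for i in np.argsort(-conf_b)[:nb]]
            y[new_for_m] = hb.predict(Xb[new_for_m])
            Lm += new_for_m
        U -= set(new_for_b) | set(new_for_m)
    return hm, hb

A combined prediction, as in the paper, can then be taken as the class maximizing the product of the two views' probabilities for a given example.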
We get counts using the suffix array tools described in (Lin et al., 2010). We add one to all counts for smoothing. Parallel Data We use the Danish, German, Greek, Spanish, Finnish, French, Italian, Dutch, Portuguese, and Swedish portions of Europarl (Koehn, 2005). We also use the Czech, German, Spanish and French news commentary data from WMT 1351 2010.1 Word-aligned English-Foreign bitexts are created using the Berkeley aligner.2 We run 5 iterations of joint IBM Model 1 training, followed by 3to-5 iterations of joint HMM training, and align with the competitive-thresholding heuristic. The English portions of all bitexts are part-of-speech tagged with CRFTagger (Phan, 2006). 94K unique coordinate NPs and their translations are then extracted. Labeled Data For experiments within the parallel text, we manually labeled 1320 of the 94K coordinate NP examples. We use 605 examples to set development parameters, 607 examples as held-out test data, and 2, 10 or 100 examples for training. For experiments on the WSJ portion of the Penn Treebank, we merge the original Treebank annotations with the NP annotations provided by Vadas and Curran (2007a). We collect all coordinate NP sequences matching our pattern and collapse them into a single example. We label these instances by determining whether the annotations have w1 and w2 conjoined. In only one case did the same coordinate NP have different labels in different occurrences; this was clearly an error and resolved accordingly. We collected 1777 coordinate NPs in total, and divided them into 777 examples for training, 500 for development and 500 as a final held-out test set. 6 Evaluation and Settings We evaluate using accuracy: the percentage of examples classified correctly in held-out test data. We compare our systems to a baseline referred to as the Tag-Triple classifier. This classifier has a single feature: the tag(w1), tag(w2), tag(h) triple. Tag-Triple is therefore essentially a discriminative, unlexicalized parser for our coordinate NPs. All classifiers use L2-regularized logistic regression training via LIBLINEAR (Fan et al., 2008). For co-training, we fix regularization at C = 0.1. For all other classifiers, we optimize the C parameter on the development data. At each iteration, i, classifier hm annotates 50 new examples for training hb, from a pool of 750 examples, while hb annotates 50 ∗i new examples for hm, from a pool of 750 ∗i examples. This ensures hm gets the majority of automaticallylabeled examples. 1www.statmt.org/wmt10/translation-task.html 2nlp.cs.berkeley.edu/pages/wordaligner.html 86 88 90 92 94 96 98 100 0 10 20 30 40 50 60 Accuracy (%) Co-training iteration Bilingual View Monolingual View Combined Figure 1: Accuracy on Bitext development data over the course of co-training (from 10 initial seed examples). We also set k, the number of co-training iterations. The monolingual, bilingual, and combined classifiers reach their optimum levels of performance after different numbers of iterations (Figure 1). We therefore set k separately for each, stopping around 16 iterations for the combined, 51 for the monolingual, and 57 for the bilingual classifier. 7 Bitext Experiments We evaluate our systems on our held-out bitext data. The majority class is ellipsis, in 55.8% of examples. For comparison, we ran two publicly-available broad-coverage parsers and analyzed whether they correctly predicted ellipsis. The parsers were the C&C parser (Curran et al., 2007) and Minipar (Lin, 1998). 
They achieved 78.6% and 77.6%.3 Table 5 shows that co-training results in much more accurate classifiers than supervised training alone, regardless of the features or amount of initial training data. The Tag-Triple system is the weakest system in all cases. This shows that better monolingual features are very important, but semisupervised training can also make a big difference. 3We provided the parsers full sentences containing the NPs. We directly extracted the labels from the C&C bracketing, while for Minipar we checked whether w1 was the head of w2. Of course, the parsers performed very poorly on ellipsis involving two nouns (partly because NP structure is absent from their training corpora (see § 2 and also Vadas and Curran (2008)), but neither exceeded 88% on adjective or mixed pairs either. 1352 # of Examples System 2 10 100 Tag-Triple classifier 67.4 79.1 82.9 Monolingual classifier 69.9 90.8 91.6 Co-trained Mono. classifier 96.4 95.9 96.0 Relative error reduction via co-training 88% 62% 52% Bilingual classifier 76.8 85.5 92.1 Co-trained Bili. classifier 93.2 93.2 93.9 Relative error reduction via co-training 71% 53% 23% Mono.+Bili. classifier 69.9 91.4 94.9 Co-trained Combo classifier 96.7 96.7 96.7 Relative error reduction via co-training 89% 62% 35% Table 5: Co-training improves accuracy (%) over standard supervised learning on Bitext test data for different feature types and number of training examples. System Accuracy ∆ Monolingual alone 91.6 + Bilingual 94.9 39% + Co-training 96.0 54% + Bilingual & Co-training 96.7 61% Table 6: Net benefits of bilingual features and co-training on Bitext data, 100-training-example setting. ∆= relative error reduction over Monolingual alone. Table 6 shows the net benefit of our main contributions. Bilingual features clearly help on this task, but not as much as co-training. With bilingual features and co-training together, we achieve 96.7% accuracy. This combined system could be used to very accurately resolve coordinate ambiguity in parallel data prior to training an MT system. 8 WSJ Experiments While we can now accurately resolve coordinate NP ambiguity in parallel text, it would be even better if this accuracy carried over to new domains, where bilingual features are not available. We test the robustness of our co-trained monolingual classifier by evaluating it on our labeled WSJ data. The Penn Treebank and the annotations added by Vadas and Curran (2007a) comprise a very special corpus; such data is clearly not available in every domain. We can take advantage of the plentiful labeled examples to also test how our co-trained system compares to supervised systems trained with inSystem Training WSJ Acc. Set # Nouns All Nakov & Hearst 79.2 84.8 Tag-Triple WSJ 777 76.1 82.4 Pitler et al. WSJ 777 92.3 92.8 MonoWSJ WSJ 777 92.3 94.4 Co-trained Bitext 2 93.8 95.6 Table 7: Coordinate resolution accuracy (%) on WSJ. domain labeled examples, and also other systems, like Nakov and Hearst (2005), which although unsupervised, are tuned on WSJ data. We reimplemented Nakov and Hearst (2005)4 and Pitler et al. (2010)5 and trained the latter on WSJ annotations. We compare these systems to Tag-Triple and also to a supervised system trained on the WSJ using only our monolingual features (MonoWSJ). The (out-of-domain) bitext co-trained system is the best system on the WSJ data, both on just the examples where w1 and w2 are nouns (Nouns), and on all examples (All) (Table 7).6 It is statistically significantly better than the prior state-of-the-art Pitler et al. 
system (McNemar’s test, p<0.05) and also exceeds the WSJ-trained system using monolingual features (p<0.2). This domain robustness is less surprising given its key features are derived from webscale N-gram data; such features are known to generalize well across domains (Bergsma et al., 2010). We tried co-training without the N-gram features, and performance was worse on the WSJ (85%) than supervised training on WSJ data alone (87%). 9 Related Work Bilingual data has been used to resolve a range of ambiguities, from PP-attachment (Schwartz et al., 2003; Fossum and Knight, 2008), to distinguishing grammatical roles (Schwarck et al., 2010), to full dependency parsing (Huang et al., 2009). Related 4Nakov and Hearst (2005) use an unsupervised algorithm that predicts ellipsis on the basis of a majority vote over a number of pattern counts and established heuristics. 5Pitler et al. (2010) uses a supervised classifier to predict bracketings; their count and binary features are a strict subset of the features used in our Monolingual classifier. 6For co-training, we tuned k on the WSJ dev set but left other parameters the same. We start from 2 training instances; results were the same or slightly better with 10 or 100 instances. 1353 work has also focused on projecting syntactic annotations from one language to another (Yarowsky and Ngai, 2001; Hwa et al., 2005), and jointly parsing the two sides of a bitext by leveraging the alignments during training and testing (Smith and Smith, 2004; Burkett and Klein, 2008) or just during training (Snyder et al., 2009). None of this work has focused on coordination, nor has it combined bitexts with web-scale monolingual information. Most prior work has focused on leveraging the alignments between a single pair of languages. Dagan et al. (1991) first articulated the need for “a multilingual corpora based system, which exploits the differences between languages to automatically acquire knowledge about word senses.” Kuhn (2004) used alignments across several Europarl bitexts to devise rules for identifying parse distituents. Bannard and Callison-Burch (2005) used multiple bitexts as part of a system for extracting paraphrases. Our co-training algorithm is well suited to using multiple bitexts because it automatically learns the value of alignment information in each language. In addition, our approach copes with noisy alignments both by aggregating information across languages (and repeated occurrences within a language), and by only selecting the most confident examples at each iteration. Burkett et al. (2010) also proposed exploiting monolingual-view and bilingualview predictors. In their work, the bilingual view encodes the per-instance agreement between monolingual predictors in two languages, while our bilingual view encodes the alignment and target text together, across multiple instances and languages. The other side of the coin is the use of syntax to perform better translation (Wu, 1997). This is a rich field of research with its own annual workshop (Syntax and Structure in Translation). Our monolingual model is most similar to previous work using counts from web-scale text, both for resolving coordination ambiguity (Nakov and Hearst, 2005; Rus et al., 2007; Pitler et al., 2010), and for syntax and semantics in general (Lapata and Keller, 2005; Bergsma et al., 2010). We do not currently use semantic similarity (either taxonomic (Resnik, 1999) or distributional (Hogan, 2007)) which has previously been found useful for coordination. 
Our model can easily include such information as additional features. Adding new features without adding new training data is often problematic, but is promising in our framework, since the bitexts provide so much indirect supervision. 10 Conclusion Resolving coordination ambiguity is hard. Parsers are reporting impressive numbers these days, but coordination remains an area with room for improvement. We focused on a specific subcase, complex NPs, and introduced a new evaluation set. We achieved a huge performance improvement from 79% for state-of-the-art parsers to 96%.7 Size matters. Most parsers are trained on a mere million words of the Penn Treebank. In this work, we show how to take advantage of billions of words of bitexts and trillions of words of unlabeled monolingual text. Larger corpora make it possible to use associations among lexical items (compare dairy production vs. asbestos chloride) and precise paraphrases (production of dairy and meat). Bitexts are helpful when the ambiguity can be resolved by some feature in another language (such as word order). The Treebank is convenient for supervised training because it has annotations. We show that even without such annotations, high-quality supervised models can be trained using co-training and features derived from huge volumes of unlabeled data. References Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proc. ACL, pages 597–604. Shane Bergsma, Emily Pitler, and Dekang Lin. 2010. Creating robust supervised classifiers via web-scale ngram data. In Proc. ACL, pages 865–874. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proc. COLT, pages 92–100. Thorsten Brants and Alex Franz. 2006. The Google Web 1T 5-gram Corpus Version 1.1. LDC2006T13. David Burkett and Dan Klein. 2008. Two languages are better than one (for syntactic parsing). In Proc. EMNLP, pages 877–886. David Burkett, Slav Petrov, John Blitzer, and Dan Klein. 2010. Learning better monolingual models with unannotated bilingual text. In Proc. CoNLL, pages 46–53. 7Evaluation scripts and data are available online: www.clsp.jhu.edu/∼sbergsma/coordNP.ACL11.zip 1354 James Curran, Stephen Clark, and Johan Bos. 2007. Linguistically motivated large-scale NLP with C&C and Boxer. In Proc. ACL Demo and Poster Sessions, pages 33–36. Ido Dagan and Alan Itai. 1990. Automatic processing of large corpora for the resolution of anaphora references. In Proc. COLING, pages 330–332. Ido Dagan, Alon Itai, and Ulrike Schwall. 1991. Two languages are more informative than one. In Proc. ACL, pages 130–137. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. JMLR, 9:1871– 1874. Victoria Fossum and Kevin Knight. 2008. Using bilingual Chinese-English word alignments to resolve PPattachment ambiguity in English. In Proc. AMTA Student Workshop, pages 48–53. Donald Hindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103–120. Deirdre Hogan. 2007. Coordinate noun phrase disambiguation in a generative parsing model. In Proc. ACL, pages 680–687. Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce parsing. In Proc. EMNLP, pages 1222–1231. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3):311–325. 
Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. MT Summit X. Jonas Kuhn. 2004. Experiments in parallel-text based grammar induction. In Proc. ACL, pages 470–477. Mirella Lapata and Frank Keller. 2005. Web-based models for natural language processing. ACM Trans. Speech and Language Processing, 2(1):1–31. Mark Lauer. 1995. Corpus statistics meet the noun compound: Some empirical results. In Proc. ACL, pages 47–54. Dekang Lin, Kenneth Church, Heng Ji, Satoshi Sekine, David Yarowsky, Shane Bergsma, Kailash Patil, Emily Pitler, Rachel Lathbury, Vikram Rao, Kapil Dalwani, and Sushant Narsale. 2010. New tools for web-scale N-grams. In Proc. LREC. Dekang Lin. 1998. Dependency-based evaluation of MINIPAR. In Proc. LREC Workshop on the Evaluation of Parsing Systems. Mitchell P. Marcus, Beatrice Santorini, and Mary Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Preslav Nakov and Marti Hearst. 2005. Using the web as an implicit training set: application to structural ambiguity resolution. In Proc. HLT-EMNLP, pages 17–24. Xuan-Hieu Phan. 2006. CRFTagger: CRF English POS Tagger. crftagger.sourceforge.net. Emily Pitler, Shane Bergsma, Dekang Lin, and Kenneth Church. 2010. Using web-scale N-grams to improve base NP parsing performance. In In Proc. COLING, pages 886–894. Philip Resnik. 1999. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial Intelligence Research, 11:95–130. Vasile Rus, Sireesha Ravi, Mihai C. Lintean, and Philip M. McCarthy. 2007. Unsupervised method for parsing coordinated base noun phrases. In Proc. CICLing, pages 229–240. Florian Schwarck, Alexander Fraser, and Hinrich Sch¨utze. 2010. Bitext-based resolution of German subject-object ambiguities. In Proc. HLT-NAACL, pages 737–740. Lee Schwartz, Takako Aikawa, and Chris Quirk. 2003. Disambiguation of English PP attachment using multilingual aligned data. In Proc. MT Summit IX, pages 330–337. David A. Smith and Noah A. Smith. 2004. Bilingual parsing with factored estimation: Using English to parse Korean. In Proc. EMNLP, pages 49–56. Benjamin Snyder, Tahira Naseem, and Regina Barzilay. 2009. Unsupervised multilingual grammar induction. In Proc. ACL-IJCNLP, pages 1041–1050. David Vadas and James R. Curran. 2007a. Adding noun phrase structure to the Penn Treebank. In Proc. ACL, pages 240–247. David Vadas and James R. Curran. 2007b. Large-scale supervised models for noun phrase bracketing. In PACLING, pages 104–112. David Vadas and James R. Curran. 2008. Parsing noun phrase structure with CCG. In Proc. ACL, pages 104– 112. Vladimir N. Vapnik. 1998. Statistical Learning Theory. John Wiley & Sons. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. David Yarowsky and Grace Ngai. 2001. Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora. In Proc. NAACL, pages 1–8. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. ACL, pages 189–196. 1355
2011
135
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1356–1364, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Unsupervised Decomposition of a Document into Authorial Components Moshe Koppel Navot Akiva Idan Dershowitz Nachum Dershowitz Dept. of Computer Science Dept. of Bible School of Computer Science Bar-Ilan University Hebrew University Tel Aviv University Ramat Gan, Israel Jerusalem, Israel Ramat Aviv, Israel {moishk,navot.akiva}@gmail.com [email protected] [email protected] Abstract We propose a novel unsupervised method for separating out distinct authorial components of a document. In particular, we show that, given a book artificially “munged” from two thematically similar biblical books, we can separate out the two constituent books almost perfectly. This allows us to automatically recapitulate many conclusions reached by Bible scholars over centuries of research. One of the key elements of our method is exploitation of differences in synonym choice by different authors. 1 Introduction We propose a novel unsupervised method for separating out distinct authorial components of a document. There are many instances in which one is faced with a multi-author document and wishes to delineate the contributions of each author. Perhaps the most salient example is that of documents of historical significance that appear to be composites of multiple earlier texts. The challenge for literary scholars is to tease apart the document’s various components. More contemporary examples include analysis of collaborative online works in which one might wish to identify the contribution of a particular author for commercial or forensic purposes. We treat two versions of the problem. In the first, easier, version, the document to be decomposed is given to us segmented into units, each of which is the work of a single author. The challenge is only to cluster the units according to author. In the second version, we are given an unsegmented document and the challenge includes segmenting the document as well as clustering the resulting units. We assume here that no information about the authors of the document is available and that in particular we are not supplied with any identified samples of any author’s writing. Thus, our methods must be entirely unsupervised. There is surprisingly little literature on this problem, despite its importance. Some work in this direction has been done on intrinsic plagiarism detection (e.g., Meyer zu Eisen 2006) and document outlier detection (e.g., Guthrie et al. 2008), but this work makes the simplifying assumption that there is a single dominant author, so that outlier units can be identified as those that deviate from the document as a whole. We don’t make this simplifying assumption. Some work on a problem that is more similar to ours was done by Graham et al. (2005). However, they assume that examples of pairs of paragraphs labeled as sameauthor/different-author are available for use as the basis of supervised learning. We make no such assumption. The obvious approach to our unsupervised version of the problem would be to segment the text (if necessary), represent each of the resulting units of text as a bag-of-words, and then use clustering algorithms to find natural clusters. We will see, however, that this naïve method is quite inadequate. Instead, we exploit a method favored by the literary scholar, namely, the use of synonym choice. 
Synonym choice proves to be far more useful for authorial decomposition than ordinary lexical features. However, synonyms are relatively 1356 sparse and hence, though reliable, they are not comprehensive; that is, they are useful for separating out some units but not all. Thus, we use a twostage process: first find a reliable partial clustering based on synonym usage and then use these as the basis for supervised learning using a different feature set, such as bag-of-words. We use biblical books as our testbed. We do this for two reasons. First, this testbed is well motivated, since scholars have been doing authorial analysis of biblical literature for centuries. Second, precisely because it is of great interest, the Bible has been manually tagged in a variety of ways that are extremely useful for our method. Our main result is that given artificial books constructed by randomly “munging” together actual biblical books, we are able to separate out authorial components with extremely high accuracy, even when the components are thematically similar. Moreover, our automated methods recapitulate many of the results of extensive manual research in authorial analysis of biblical literature. The structure of the paper is as follows. In the next section, we briefly review essential information regarding our biblical testbed. In Section 3, we introduce a naïve method for separating components and demonstrate its inadequacy. In Section 4, we introduce the synonym method, in Section 5 we extend it to the two-stage method, and in Section 6, we offer systematic empirical results to validate the method. In Section 7, we extend our method to handle documents that have not been presegmented and present more empirical results. In Section 8, we suggest conclusions, including some implications for Bible scholarship. 2 The Bible as Testbed While the biblical canon differs across religions and denominations, the common denominator consists of twenty-odd books and several shorter works, ranging in length from tens to thousands of verses. These works vary significantly in genre, and include historical narrative, law, prophecy, and wisdom literature. Some of these books are regarded by scholars as largely the product of a single author’s work, while others are thought to be composites in which multiple authors are wellrepresented – authors who in some cases lived in widely disparate periods. In this paper, we will focus exclusively on the Hebrew books of the Bible, and we will work with the original untranslated texts. The first five books of the Bible, collectively known as the Pentateuch, are the subject of much controversy. According to the predominant Jewish and Christian traditions, the five books were written by a single author – Moses. Nevertheless, scholars have found in the Pentateuch what they believe are distinct narrative and stylistic threads corresponding to multiple authors. Until now, the work of analyzing composite texts has been done in mostly impressionistic fashion, whereby each scholar attempts to detect the telltale signs of multiple authorship and compilation. Some work on biblical authorship problems within a computational framework has been attempted, but does not handle our problem. Much earlier work (for example, Radday 1970; Bee 1971; Holmes 1994) uses multivariate analysis to test whether the clusters in a given clustering of some biblical text are sufficiently distinct to be regarded as probably a composite text. 
By contrast, our aim is to find the optimal clustering of a document, given that it is composite. Crucially, unlike that earlier work, we empirically prove the efficacy of our methods by testing it against known ground truth. Other computational work on biblical authorship problems (Mealand 1995; Berryman et al. 2003) involves supervised learning problems where some disputed text is to be attributed to one of a set of known authors. The supervised authorship attribution problem has been well-researched (for surveys, see Juola (2008), Koppel et al. (2009) and Stamatatos (2009)), but it is quite distinct from the unsupervised problem we consider here. Since our problem has been dealt with almost exclusively using heuristic methods, the subjective nature of such research has left much room for debate. We propose to set this work on a firm algorithmic basis by identifying an optimal stylistic subdivision of the text. We do not concern ourselves with how or why such distinct threads exist. Those for whom it is a matter of faith that the Pentateuch is not a composition of multiple writers can view the distinction investigated here as that of multiple styles. 3 A Naïve Algorithm For expository purposes, we will use a canonical example to motivate and illustrate each of a 1357 sequence of increasingly sophisticated algorithms for solving the decomposition problem. Jeremiah and Ezekiel are two roughly contemporaneous books belonging to the same biblical sub-genre (prophetic works), and each is widely thought to consist primarily of the work of a single distinct author. Jeremiah consists of 52 chapters and Ezekiel consists of 48 chapters. For our first challenge, we are given all 100 unlabeled chapters and our task is to separate them out into the two constituent books. (For simplicity, let’s assume that it is known that there are exactly two natural clusters.) Note that this is a pre-segmented version of the problem since we know that each chapter belongs to only one of the books. As a first try, the basics of which will serve as a foundation for more sophisticated attempts, we do the following: 1. Represent each chapter as a bag-of-words (using all words that appear at least k times in the corpus). 2. Compute the similarity of every pair of chapters in the corpus. 3. Use a clustering algorithm to cluster the chapters into two clusters. We use k=2, cosine similarity and ncut clustering (Dhillon et al. 2004). Comparing the JeremiahEzekiel split to the clusters thus obtained, we have the following matrix: Book Cluster I Cluster II Jer Eze 29 28 23 20 As can be seen, the clusters are essentially orthogonal to the Jeremiah-Ezekiel split. Ideally, 100% of the chapters would lie on the majority diagonal, but in fact only 51% do. Formally, our measure of correspondence between the desired clustering and the actual one is computed by first normalizing rows and then computing the weight of the majority diagonal relative to the whole. This measure, which we call normalized majority diagonal (NMD), runs from 50% (when the clusters are completely orthogonal to the desired split) to 100% (where the clusters are identical with the desired split). NMD is equivalent to maximal macro-averaged recall where the maximum is taken over the (two) possible assignments of books to clusters. In this case, we obtain an NMD of 51.5%, barely above the theoretical minimum. This negative result is not especially surprising since there are many ways for the chapters to split (e.g., according to thematic elements, sub-genre, etc.) 
and we can’t expect an unsupervised method to read our minds. Thus, to guide the method in the direction of stylistic elements that might distinguish between Jeremiah and Ezekiel, we define a class of generic biblical words consisting of all 223 words that appear at least five times in each of ten different books of the Bible. Repeating our experiment of above, though limiting our feature set to generic biblical words, we obtain the following matrix: Book Cluster I Cluster II Jer Eze 32 28 20 20 As can be seen, using generic words yields NMD of 51.3%, which does not improve matters at all. Thus, we need to try a different approach. 4 Exploiting Synonym Usage One of the key features used by Bible scholars to classify different components of biblical literature is synonym choice. The underlying hypothesis is that different authorial components are likely to differ in the proportions with which alternative words from a set of synonyms (synset) are used. This hypothesis played a part in the pioneering work of Astruc (1753) on the book of Genesis – using a single synset: divine names – and has been refined by many others using broader feature sets, such as that of Carpenter and Hartford-Battersby (1900). More recently, the synonym hypothesis has been used in computational work on authorship attribution of English texts in the work of Clark and Hannon (2007) and Koppel et al. (2006). This approach presents several technical challenges. First, ideally – in the absence of a sufficiently comprehensive thesaurus – we would wish to identify synonyms in an automated fashion. Second, we need to adapt our similarity measure for reasons that will be made clear below. 4.1 (Almost) Automatic Synset Identification One of the advantages of using biblical literature is the availability of a great deal of manual annotation. In particular, we are able to identify synsets by exploiting the availability of the standard King James translation of the Bible into Eng1358 lish (KJV). Conveniently, and unlike most modern translations, KJV almost invariably translates synonyms identically. Thus, we can generally identify synonyms by considering the translated version of the text. There are two points we need to be precise about. First, it is not actually words that we regard as synonymous, but rather word roots. Second, to be even more precise, it is not quite roots that are synonymous, but rather senses of roots. Conveniently, Strong’s (1890 [2010]) Concordance lists every occurrence of each sense of each root that appears in the Bible separately (where senses are distinguished in accordance with the KJV translation). Thus, we can exploit KJV and the concordance to automatically identify synsets as well as occurrences of the respective synonyms in a synset.1 (The above notwithstanding, there is still a need for a bit of manual intervention: due to polysemy in English, false synsets are occasionally created when two non-synonymous Hebrew words are translated into two senses of the same English word. Although this could probably be handled automatically, we found it more convenient to do a manual pass over the raw synsets and eliminate the problems.) The above procedure yields a set of 529 synsets including a total of 1595 individual synonyms. Most synsets consist of only two synonyms, but some include many more. For example, there are 7 Hebrew synonyms corresponding to “fear”. 4.2 Adapting the Similarity Measure Let’s now represent a unit of text as a vector in the following way. 
Each entry represents a synonym in one of the synsets. If none of the synonyms in a synset appear in the unit, all their corresponding entries are 0. If j different synonyms in a synset appear in the unit, then each corresponding entry is 1/j and the rest are 0. Thus, in the typical case where exactly one of the synonyms in a synset appears, its corresponding entry in the vector is 1 and the rest are 0. Now we wish to measure the similarity of two such vectors. The usual cosine measure doesn’t capture what we want for the following reason. If the two units use different members of a synset, cosine is diminished; if they use the same members of a synset, cosine is increased. So far, so good. But suppose one unit uses a particular synonym 1 Thanks to Avi Shmidman for his assistance with this. and the other doesn’t use any member of that synset. This should teach us nothing about the similarity of the two units, since it reflects only on the relevance of the synset to the content of that unit; it says nothing about which synonym is chosen when the synset is relevant. Nevertheless, in this case, cosine would be diminished. The required adaptation is as follows: we first eliminate from the representation any synsets that do not appear in both units (where a synset is said to appear in a unit if any of its constituent synonyms appear in the unit). We then compute cosine of the truncated vectors. Formally, for a unit x represented in terms of synonyms, our new similarity measure is cos'(x,y) = cos(x|S(x ∩y),y|S(x ∩y)), where x|S(x ∩y) is the projection of x onto the synsets that appear in both x and y. 4.3 Clustering Jeremiah-Ezekiel Using Synonyms We now apply ncut clustering to the similarity matrix computed as described above. We obtain the following split: Book Cluster I Cluster II Jer Eze 48 5 4 43 Clearly, this is quite a bit better than results obtained using simple lexical features as described above. Intuition for why this works can be purchased by considering concrete examples. There are two Hebrew synonyms – pēʾâh and miqṣôaʿ corresponding to the word “corner”, two (minḥâh and tĕrûmâh) corresponding to the word “oblation”, and two (nāṭaʿ and šāṯal) corresponding to the word “planted”. We find that pēʾâh, minḥâh and nāṭaʿ tend to be located in the same units and, concomitantly, miqṣôaʿ, tĕrûmâh and šāṯal are located in the same units. Conveniently, the former are all Jeremiah and the latter are all Ezekiel. While the above result is far better than those obtained using more naïve feature sets, it is, nevertheless, far from perfect. We have, however, one more trick at our disposal that will improve these results further. 5 Combining Partial Clustering and Supervised Learning Analysis of the above clustering results leads to two observations. First, some of the units belong 1359 firmly to one cluster or the other. The rest have to be assigned to one cluster or the other because that’s the nature of the clustering algorithm, but in fact are not part of what we might think of as the core of either cluster. Informally, we say that a unit is in the core of its cluster if it is sufficiently similar to the centroid of its cluster and it is sufficiently more similar to the centroid of its cluster than to any other centroid. Formally, let S be a set of synsets, let B be a set of units, and let C be a clustering of B where the units in B are represented in terms of the synsets in S. 
For a unit x in cluster C(x) with centroid c(x), we say that x is in the core of C(x) if cos'(x,c(x))>θ1 and cos'(x,c(x))-cos'(x,c)>θ2 for every centroid c≠c(x). In our experiments below, we use θ1=1/√2 (corresponding to an angle of less than 45 degrees between x and the centroid of its cluster) and θ2=0.1. Second, the clusters that we obtain are based on a subset of the full collection of synsets that does the heavy lifting. Formally, we say that a synonym n in synset s is over-represented in cluster C if p(x∈C|n∈x) > p(x∈C|s∈x) and p(x∈C|n∈x) > p(x∈C). That is, n is over-represented in C if knowing that n appears in a unit increases the likelihood that the unit is in C, relative to knowing only that some member of n’s synset appears in the unit and relative to knowing nothing. We say that a synset s is a separating synset for a clustering {C1,C2} if some synonym in s is over-represented in C1 and a different synonym in s is over-represented in C2. 5.1 Defining the Core of a Cluster We leverage these two observations to formally define the cores of the respective clusters using the following iterative algorithm. 1. Initially, let S be the collection of all synsets, let B be the set of all units in the corpus represented in terms of S, and let {C1,C2} be an initial clustering of the units in B. 2. Reduce B to the cores of C1 and C2. 3. Reduce S to the separating synsets for {C1,C2}. 4. Redefine C1 and C2 to be the clusters obtained from clustering the units in the reduced B represented in terms of the synsets in reduced S. 5. Repeat Steps 2-4 until convergence (no further changes to the retained units and synsets). At the end of this process, we are left with two well-separated cluster cores and a set of separating synsets. When we compute cores of clusters in our Jeremiah-Ezekiel experiment, 26 of the initial 100 units are eliminated. Of the 154 synsets that appear in the Jeremiah-Ezekiel corpus, 118 are separating synsets for the resulting clustering. The resulting cluster cores split with Jeremiah and Ezekiel as follows: Book Cluster I Cluster II Jer Eze 36 2 0 36 We find that all but two of the misplaced units are not part of the core. Thus, we have a better clustering but it is only a partial one. 5.2 Using Cores for Supervised Learning Now that we have what we believe are strong representatives of each cluster, we can use them in a supervised way to classify the remaining unclustered units. The interesting question is which feature set we should use. Using synonyms would just get us back to where we began. Instead we use the set of generic Bible words introduced earlier. The point to recall is that while this feature set proved inadequate in an unsupervised setting, this does not mean that it is inadequate for separating Jeremiah and Ezekiel, given a few good training examples. Thus, we use a bag-of-words representation restricted to generic Bible words for the 74 units in our cluster cores and label them according to the cluster to which they were assigned. We now apply SVM to learn a classifier for the two clusters. We assign each unit, including those in the training set, to the class assigned to it by the SVM classifier. The resulting split is as follows: Book Cluster I Cluster II Jer Eze 51 0 1 48 Remarkably, even the two Ezekiel chapters that were in the Jeremiah cluster (and hence were essentially misleading training examples) end up on the Ezekiel side of the SVM boundary. 
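The adapted similarity and the core test defined above can be written down compactly. The following is a minimal sketch, not the authors' code; it assumes each unit is represented as a dict mapping a synset id to the weights of its synonyms that occur in the unit.

import math

def cos_prime(x, y):
    """cos'(x,y): drop synsets that do not appear in both units, then take
    the ordinary cosine over the remaining synonym entries."""
    shared = set(x) & set(y)
    if not shared:
        return 0.0
    dot = sum(x[s].get(syn, 0.0) * y[s].get(syn, 0.0)
              for s in shared for syn in set(x[s]) | set(y[s]))
    nx = math.sqrt(sum(v * v for s in shared for v in x[s].values()))
    ny = math.sqrt(sum(v * v for s in shared for v in y[s].values()))
    return dot / (nx * ny) if nx and ny else 0.0

def in_core(x, own_centroid, other_centroids,
            theta1=1 / math.sqrt(2), theta2=0.1):
    """A unit is in its cluster core if it is close enough to its own
    centroid and sufficiently closer to it than to every other centroid."""
    own = cos_prime(x, own_centroid)
    return own > theta1 and all(own - cos_prime(x, c) > theta2
                                for c in other_centroids)

With two clusters, other_centroids is simply the singleton list containing the opposing centroid.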
It should be noted that our two-stage approach to clustering is a generic method not specific to our particular application. The point is that there are some feature sets that are very well suited to a particular unsupervised problem but are sparse, so they give only a partial clustering. At the same time, there are other feature sets that are denser and, possibly for that reason, adequate for super1360 vised separation of the intended classes but inadequate for unsupervised separation of the intended classes. This suggests an obvious two-stage method for clustering, which we use here to good advantage. This method is somewhat reminiscent of semisupervised methods sometimes used in text categorization where few training examples are available (Nigam et al. 2000). However, those methods typically begin with some information, either in the form of a small number of labeled documents or in the form of keywords, while we are not supplied with these. Furthermore, the semi-supervised work bootstraps iteratively, at each stage using features drawn from within the same feature set, while we use exactly two stages, the second of which uses a different type of feature set than the first. For the reader’s convenience, we summarize the entire two-stage method: 1. Represent units in terms of synonyms. 2. Compute similarities of pairs of units using cos'. 3. Use ncut to obtain an initial clustering. 4. Use the iterative method to find cluster cores. 5. Represent units in cluster cores in terms of generic words. 6. Use units in cluster cores as training for learning an SVM classifier. 7. Classify all units according to the learned SVM classifier. 6 Empirical Results We now test our method on other pairs of biblical books to see if we obtain comparable results to those seen above. We need, therefore, to identify a set of biblical books such that (i) each book is sufficiently long (say, at least 20 chapters), (ii) each is written by one primary author, and (iii) the authors are distinct. Since we wish to use these books as a gold standard, it is important that there be a broad consensus regarding the latter two, potentially controversial, criteria. Our choice is thus limited to the following five books that belong to two biblical sub-genres: Isaiah, Jeremiah, Ezekiel (prophetic literature), Job and Proverbs (wisdom literature). (Due to controversies regarding authorship (Pope 1952, 1965), we include only Chapters 1-33 of Isaiah and only Chapters 3-41 of Job.) Recall that our experiment is as follows: For each pair of books, we are given all the chapters in the union of the two books and are given no information regarding labels. The object is to sort out the chapters belonging to the respective two books. (The fact that there are precisely two constituent books is given.) We will use the three algorithms seen above: 1. generic biblical words representation and ncut clustering; 2. synonym representation and ncut clustering; 3. our two-stage algorithm. We display the results in two separate figures. In Figure 1, we see results for the six pairs of books that belong to different sub-genres. In Figure 2, we see results for the four pairs of books that are in the same genre. (For completeness, we include Jeremiah-Ezekiel, although it served above as a development corpus.) All results are normalized majority diagonal. Figure 1. Results of three clustering methods for different-genre pairs Figure 2. 
Results of three clustering methods for samegenre pairs As is evident, for different-genre pairs, even the simplest method works quite well, though not as well as the two-stage method, which is perfect for five of six such pairs. The real advantage of the two-stage method is for same-genre pairs. For 1361 these the simple method is quite erratic, while the two-stage method is near perfect. We note that the synonym method without the second stage is slightly worse than generic words for differentgenre pairs (probably because these pairs share relatively few synsets) but is much more consistent for same-genre pairs, giving results in the area of 90% for each such pair. The second stage reduces the errors considerably over the synonym method for both same-genre and different-genre pairs. 7 Decomposing Unsegmented Documents Up to now, we have considered the case where we are given text that has been pre-segmented into pure authorial units. This does not capture the kind of decomposition problems we face in real life. For example, in the Pentateuch problem, the text is divided up according to chapter, but there is no indication that the chapter breaks are correlated with crossovers between authorial units. Thus, we wish now to generalize our two-stage method to handle unsegmented text. 7.1 Generating Composite Documents To make the problem precise, let’s consider how we might create the kind of document that we wish to decompose. For concreteness, let’s think about Jeremiah and Ezekiel. We create a composite document, called Jer-iel, as follows: 1. Choose the first k1 available verses of Jeremiah, where k1 is a random integer drawn from the uniform distribution over the integers 1 to m. 2. Choose the first k2 available verses of Ezekiel, where k2 is a new random integer drawn from the above distribution. 3. Repeat until one of the books is exhausted; then choose the remaining verses of the other book. For the experiments discussed below, we use m=100 (though further experiments, omitted for lack of space, show that results shown are essentially unchanged for any m≥60). Furthermore, to simulate the Pentateuch problem, we break Jer-iel into initial units by beginning a new unit whenever we reach the first verse of one of the original chapters of Jeremiah or Ezekiel. (This does not leak any information since there is no inherent connection between these verses and actual crossover points.) 7.2 Applying the Two-Stage Method Our method works as follows. First, we refine the initial units (each of which might be a mix of verses from Jeremiah and Ezekiel) by splitting them into smaller units that we hope will be pure (wholly from Jeremiah or from Ezekiel). We say that a synset is doubly-represented in a unit if the unit includes two different synonyms of that synset. Doubly-represented synsets are an indication that the unit might include verses from two different books. Our object is thus to split the unit in a way that minimizes doubly-represented synonyms. Formally, let M(x) represent the number of synsets for which more than one synonym appear in x. Call 〈x1,x2〉 a split of x if x=x1x2. A split 〈x1',x2'〉 is optimal if 〈x1',x2'〉= argmax M(x)-max(M(x1),M(x2)) where the maximum is taken over all splits of x. If for an initial unit, there is some split for which M(x)max(M(x1),M(x2)) is greater than 0, we split the unit optimally; if there is more than one optimal split, we choose the one closest to the middle verse of the unit. 
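A minimal sketch of this splitting criterion follows (illustrative only, not the authors' code); a unit is assumed to be a list of verses, each a list of word roots, and synset_of is assumed to map a synonym root to its synset id.

def num_doubly_represented(unit, synset_of):
    """M(x): number of synsets with more than one of their synonyms
    occurring in the unit."""
    seen = {}
    for verse in unit:
        for root in verse:
            if root in synset_of:
                seen.setdefault(synset_of[root], set()).add(root)
    return sum(1 for syns in seen.values() if len(syns) > 1)

def best_split(unit, synset_of):
    """Return the verse index maximizing M(x) - max(M(x1), M(x2)), breaking
    ties toward the middle of the unit; None if no split reduces M."""
    m_whole = num_doubly_represented(unit, synset_of)
    best, best_gain = None, 0
    for i in range(1, len(unit)):
        gain = m_whole - max(num_doubly_represented(unit[:i], synset_of),
                             num_doubly_represented(unit[i:], synset_of))
        mid = len(unit) / 2.0
        closer = best is None or abs(i - mid) < abs(best - mid)
        if gain > best_gain or (gain == best_gain and gain > 0 and closer):
            best, best_gain = i, gain
    return best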
(In principle, we could apply this procedure iteratively; in the experiments reported here, we split only the initial units but not split units.) Next, we run the first six steps of the two-stage method on the units of Jer-iel obtained from the splitting process, as described above, until the point where the SVM classifier has been learned. Now, instead of classifying chapters as in Step 7 of the algorithm, we classify individual verses. The problem with classifying individual verses is that verses are short and may contain few or no relevant features. In order to remedy this, and also to take advantage of the stickiness of classes across consecutive verses (if a given verse is from a certain book, there is a good chance that the next verse is from the same book), we use two smoothing tactics. Initially, each verse is assigned a raw score by the SVM classifier, representing its signed distance from the SVM boundary. We smooth these scores by computing for each verse a refined score that is a weighted average of the verse’s raw score and the raw scores of the two verses preceding and succeeding it. (In our scheme, the verse itself is given 1.5 times as much weight as its immediate neighbors and three times as much weight as secondary neighbors.) Moreover, if the refined score is less than 1.0 (the width of the SVM margin), we do not initially 1362 assign the verse to either class. Rather, we check the class of the last assigned verse before it and the first assigned verse after it. If these are the same, the verse is assigned to that class (an operation we call “filling the gaps”). If they are not, the verse remains unassigned. To illustrate on the case of Jer-iel, our original “munged” book has 96 units. After pre-splitting, we have 143 units. Of these, 105 are pure units. Our two cluster cores, include 33 and 39 units, respectively; 27 of the former are pure Jeremiah and 30 of the latter are pure Ezekiel; no pure units are in the “wrong” cluster core. Applying the SVM classifier learned on the cluster cores to individual verses, 992 of the 2637 verses in Jer-iel lie outside the SVM margin and are assigned to some class. All but four of these are assigned correctly. Filling the gaps assigns a class to 1186 more verses, all but ten of them correctly. Of the remaining 459 unassigned verses, most lie along transition points (where smoothing tends to flatten scores and where preceding and succeeding assigned verses tend to belong to opposite classes). 7.3 Empirical Results We randomly generated composite books for each of the book pairs considered above. In Figures 3 and 4, we show for each book pair the percentage of all verses in the munged document that are “correctly” classed (that is, in the majority diagonal), the percentage incorrectly classed (minority diagonal) and the percentage not assigned to either class. As is evident, in each case the vast majority of verses are correctly assigned and only a small fraction are incorrectly assigned. That is, we can tease apart the components almost perfectly. Figure 3. Percentage of verses in each munged different-genre pair of books that are correctly and incorrectly assigned or remain unassigned. Figure 4. Percentage of verses in each munged samegenre pair of books that are correctly and incorrectly assigned or remain unassigned. 8 Conclusions and Future Work We have shown that documents can be decomposed into authorial components with very high accuracy by using a two-stage process. 
First, we establish a reliable partial clustering of units by using synonym choice and then we use these partial clusters as training texts for supervised learning using generic words as features. We have considered only decompositions into two components, although our method generalizes trivially to more than two components, for example by applying it iteratively. The real challenge is to determine the correct number of components, where this information is not given. We leave this for future work. Despite this limitation, our success on munged biblical books suggests that our method can be fruitfully applied to the Pentateuch, since the broad consensus in the field is that the Pentateuch can be divided into two main authorial categories: Priestly (P) and non-Priestly (Driver 1909). (Both categories are often divided further, but these subdivisions are more controversial.) We find that our split corresponds to the expert consensus regarding P and non-P for over 90% of the verses in the Pentateuch for which such consensus exists. We have thus been able to largely recapitulate several centuries of painstaking manual labor with our automated method. We offer those instances in which we disagree with the consensus for the consideration of scholars in the field. In this work, we have exploited the availability of tools for identifying synonyms in biblical literature. In future work, we intend to extend our methods to texts for which such tools are unavailable. 1363 References J. Astruc. 1753. Conjectures sur les mémoires originaux dont il paroit que Moyse s’est servi pour composer le livre de la Genèse. Brussels. R. E. Bee. 1971. Statistical methods in the study of the Masoretic text of the Old Testament. J. of the Royal Statistical Society, 134(1):611-622. M. J. Berryman, A. Allison, and D. Abbott. 2003. Statistical techniques for text classification based on word recurrence intervals. Fluctuation and Noise Letters, 3(1):L1-L10. J. E. Carpenter, G. Hartford-Battersby. 1900. The Hexateuch: According to the Revised Version. London. J. Clark and C. Hannon. 2007. A classifier system for author recognition using synonym-based features. Proc. Sixth Mexican International Conference on Artificial Intelligence, Lecture Notes in Artificial Intelligence, vol. 4827, pp. 839-849. I. S. Dhillon, Y. Guan, and B. Kulis. 2004. Kernel kmeans: spectral clustering and normalized cuts. Proc. ACM International Conference on Knowledge Discovery and Data Mining (KDD), pp. 551-556. S. R. Driver. 1909. An Introduction to the Literature of the Old Testament (8th ed.). Clark, Edinburgh. N. Graham, G. Hirst, and B. Marthi. 2005. Segmenting documents by stylistic character. Natural Language Engineering, 11(4):397-415. D. Guthrie, L. Guthrie, and Y. Wilks. 2008. An unsupervised probabilistic approach for the detection of outliers in corpora. Proc. Sixth International Language Resources and Evaluation (LREC'08), pp. 2830. D. Holmes. 1994. Authorship attribution, Computers and the Humanities, 28(2):87-106. P. Juola. 2008. Author Attribution. Series title: Foundations and Trends in Information Retrieval. Now Publishing, Delft. M. Koppel, N. Akiva, and I. Dagan. 2006. Feature instability as a criterion for selecting potential style markers. J. of the American Society for Information Science and Technology, 57(11):1519-1525. M. Koppel, J. Schler, and S. Argamon. 2009. Computational methods in authorship attribution. J. of the American Society for Information Science and Technology, 60(1):9-26. D. L. Mealand. 1995. 
Correspondence analysis of Luke. Lit. Linguist Computing, 10(3):171-182. S. Meyer zu Eisen and B. Stein. 2006. Intrinsic plagiarism detection. Proc. European Conference on Information Retrieval (ECIR 2006), Lecture Notes in Computer Science, vol. 3936, pp. 565–569. K. Nigam, A. K. McCallum, S. Thrun, and T. M. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM, Machine Learning, 39(2/3):103-134. M. H. Pope. 1965. Job (The Anchor Bible, Vol. XV). Doubleday, New York, NY. M. H. Pope. 1952. Isaiah 34 in relation to Isaiah 35, 4066. Journal of Biblical Literature, 71(4):235-243. Y. Radday. 1970. Isaiah and the computer: A preliminary report, Computers and the Humanities, 5(2):6573. E. Stamatatos. 2009. A survey of modern authorship attribution methods. J. of the American Society for Information Science and Technology, 60(3):538-556. J. Strong. 1890. The Exhaustive Concordance of the Bible. Nashville, TN. (Online edition: http://www.htmlbible.com/sacrednamebiblecom/kjvs trongs/STRINDEX.htm; accessed 14 November 2010.) 1364
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1365–1374, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Discovering Sociolinguistic Associations with Structured Sparsity Jacob Eisenstein Noah A. Smith Eric P. Xing School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213, USA {jacobeis,nasmith,epxing}@cs.cmu.edu Abstract We present a method to discover robust and interpretable sociolinguistic associations from raw geotagged text data. Using aggregate demographic statistics about the authors’ geographic communities, we solve a multi-output regression problem between demographics and lexical frequencies. By imposing a composite ℓ1,∞regularizer, we obtain structured sparsity, driving entire rows of coefficients to zero. We perform two regression studies. First, we use term frequencies to predict demographic attributes; our method identifies a compact set of words that are strongly associated with author demographics. Next, we conjoin demographic attributes into features, which we use to predict term frequencies. The composite regularizer identifies a small number of features, which correspond to communities of authors united by shared demographic and linguistic properties. 1 Introduction How is language influenced by the speaker’s sociocultural identity? Quantitative sociolinguistics usually addresses this question through carefully crafted studies that correlate individual demographic attributes and linguistic variables—for example, the interaction between income and the “dropped r” feature of the New York accent (Labov, 1966). But such studies require the knowledge to select the “dropped r” and the speaker’s income, from thousands of other possibilities. In this paper, we present a method to acquire such patterns from raw data. Using multi-output regression with structured sparsity, our method identifies a small subset of lexical items that are most influenced by demographics, and discovers conjunctions of demographic attributes that are especially salient for lexical variation. Sociolinguistic associations are difficult to model, because the space of potentially relevant interactions is large and complex. On the linguistic side there are thousands of possible variables, even if we limit ourselves to unigram lexical features. On the demographic side, the interaction between demographic attributes is often non-linear: for example, gender may negate or amplify class-based language differences (Zhang, 2005). Thus, additive models which assume that each demographic attribute makes a linear contribution are inadequate. In this paper, we explore the large space of potential sociolinguistic associations using structured sparsity. We treat the relationship between language and demographics as a set of multi-input, multioutput regression problems. The regression coefficients are arranged in a matrix, with rows indicating predictors and columns indicating outputs. We apply a composite regularizer that drives entire rows of the coefficient matrix to zero, yielding compact, interpretable models that reuse features across different outputs. If we treat the lexical frequencies as inputs and the author’s demographics as outputs, the induced sparsity pattern reveals the set of lexical items that is most closely tied to demographics. 
If we treat the demographic attributes as inputs and build a model to predict the text, we can incrementally construct a conjunctive feature space of demographic attributes, capturing key non-linear interactions. 1365 The primary purpose of this research is exploratory data analysis to identify both the most linguistic-salient demographic features, and the most demographically-salient words. However, this model also enables predictions about demographic features by analyzing raw text, potentially supporting applications in targeted information extraction or advertising. On the task of predicting demographics from text, we find that our sparse model yields performance that is statistically indistinguishable from the full vocabulary, even with a reduction in the model complexity an order of magnitude. On the task of predicting text from author demographics, we find that our incrementally constructed feature set obtains significantly better perplexity than a linear model of demographic attributes. 2 Data Our dataset is derived from prior work in which we gathered the text and geographical locations of 9,250 microbloggers on the website twitter. com (Eisenstein et al., 2010). Bloggers were selected from a pool of frequent posters whose messages include metadata indicating a geographical location within a bounding box around the continental United States. We limit the vocabulary to the 5,418 terms which are used by at least 40 authors; no stoplists are applied, as the use of standard or nonstandard orthography for stopwords (e.g., to vs. 2) may convey important information about the author. The dataset includes messages during the first week of March 2010. O’Connor et al. (2010) obtained aggregate demographic statistics for these data by mapping geolocations to publicly-available data from the U. S. Census ZIP Code Tabulation Areas (ZCTA).1 There are 33,178 such areas in the USA (the 9,250 microbloggers in our dataset occupy 3,458 unique ZCTAs), and they are designed to contain roughly equal numbers of inhabitants and demographicallyhomogeneous populations. The demographic attributes that we consider in this paper are shown in Table 1. All attributes are based on self-reports. The race and ethnicity attributes are not mutually exclusive—individuals can indicate any number of races or ethnicities. The “other language” attribute 1http://www.census.gov/support/cen2000. html mean std. dev. race & ethnicity % white 52.1 29.0 % African American 32.2 29.1 % Hispanic 15.7 18.3 language % English speakers 73.7 18.4 % Spanish speakers 14.6 15.6 % other language speakers 11.7 9.2 socioeconomic % urban 95.1 14.3 % with family 64.1 14.4 % renters 48.9 23.4 median income ($) 42,500 18,100 Table 1: The demographic attributes used in this research. aggregates all languages besides English and Spanish. “Urban areas” refer to sets of census tracts or census blocks which contain at least 2,500 residents; our “% urban” attribute is the percentage of individuals in each ZCTA who are listed as living in an urban area. We also consider the percentage of individuals who live with their families, the percentage who live in rented housing, and the median reported income in each ZCTA. While geographical aggregate statistics are frequently used to proxy for individual socioeconomic status in research areas such as public health (e.g., Rushton, 2008), it is clear that interpretation must proceed with caution. 
Consider an author from a ZIP code in which 60% of the residents are Hispanic:2 we do not know the likelihood that the author is Hispanic, because the set of Twitter users is not a representative sample of the overall population. Polling research suggests that users of both Twitter (Smith and Rainie, 2010) and geolocation services (Zickuhr and Smith, 2010) are much more diverse with respect to age, gender, race and ethnicity than the general population of Internet users. Nonetheless, at present we can only use aggregate statistics to make inferences about the geographic communities in which our authors live, and not the authors themselves. 2In the U.S. Census, the official ethnonym is Hispanic or Latino; for brevity we will use Hispanic in the rest of this paper. 1366 3 Models The selection of both words and demographic features can be framed in terms of multi-output regression with structured sparsity. To select the lexical indicators that best predict demographics, we construct a regression problem in which term frequencies are the predictors and demographic attributes are the outputs; to select the demographic features that predict word use, this arrangement is reversed. Through structured sparsity, we learn models in which entire sets of coefficients are driven to zero; this tells us which words and demographic features can safely be ignored. This section describes the model and implementation for output-regression with structured sparsity; in Section 4 and 5 we give the details of its application to select terms and demographic features. Formally, we consider the linear equation Y = XB+ϵ, where, • Y is the dependent variable matrix, with dimensions N × T, where N is the number of samples and T is the number of output dimensions (or tasks); • X is the independent variable matrix, with dimensions N × P, where P is the number of input dimensions (or predictors); • B is the matrix of regression coefficients, with dimensions P × T; • ϵ is a N × T matrix in which each element is noise from a zero-mean Gaussian distribution. We would like to solve the unconstrained optimization problem, minimizeB ||Y −XB||2 F + λR(B), (1) where ||A||2 F indicates the squared Frobenius norm P i P j a2 ij, and the function R(B) defines a norm on the regression coefficients B. Ridge regression applies the ℓ2 norm R(B) = PT t=1 qPP p b2 pt, and lasso regression applies the ℓ1 norm R(B) = PT t=1 PP p |bpt|; in both cases, it is possible to decompose the multi-output regression problem, treating each output dimension separately. However, our working hypothesis is that there will be substantial correlations across both the vocabulary and the demographic features—for example, a demographic feature such as the percentage of Spanish speakers will predict a large set of words. Our goal is to select a small set of predictors yielding good performance across all output dimensions. Thus, we desire structured sparsity, in which entire rows of the coefficient matrix B are driven to zero. Structured sparsity is not achieved by the lasso’s ℓ1 norm. The lasso gives element-wise sparsity, in which many entries of B are driven to zero, but each predictor may have a non-zero value for some output dimension. To drive entire rows of B to zero, we require a composite regularizer. We consider the ℓ1,∞ norm, which is the sum of ℓ∞norms across output dimensions: R(B) = PT t maxp bpt (Turlach et al., 2005). This norm, which corresponds to a multioutput lasso regression, has the desired property of driving entire rows of B to zero. 
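To illustrate why this composite penalty yields row-structured sparsity, the following sketch fits a multi-output lasso whose penalty sums the ℓ∞ norm of each row of B (the form that zeroes out whole predictors). It uses a simple proximal-gradient loop rather than the blockwise coordinate descent actually used in this work (described in the next subsection), with the proximal operator obtained from the ℓ1-ball projection of Duchi et al. (2008). The toy data, λ value, and iteration count are all illustrative.

```python
import numpy as np

def project_l1_ball(v, z):
    """Euclidean projection of v onto the l1-ball of radius z (Duchi et al., 2008)."""
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (cssv - z))[0][-1]
    theta = (cssv[rho] - z) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Proximal operator of lam * ||v||_inf, via Moreau decomposition."""
    return v - project_l1_ball(v, lam)

def multi_output_lasso(X, Y, lam, n_iter=500):
    """Minimize ||Y - XB||_F^2 + lam * sum_p max_t |B[p, t]| by proximal gradient.

    The prox is applied to each row of B, so a predictor is either dropped for
    every output dimension or kept: structured, row-wise sparsity."""
    B = np.zeros((X.shape[1], Y.shape[1]))
    L = 2.0 * np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ B - Y)
        Z = B - grad / L
        B = np.vstack([prox_linf(Z[p], lam / L) for p in range(Z.shape[0])])
    return B

# Toy problem: only the first 2 of 10 predictors matter, for all 3 outputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
B_true = np.zeros((10, 3))
B_true[:2] = rng.normal(size=(2, 3))
Y = X @ B_true + 0.1 * rng.normal(size=(200, 3))
B_hat = multi_output_lasso(X, Y, lam=20.0)
print("non-zero rows:", np.nonzero(np.abs(B_hat).max(axis=1) > 1e-8)[0])  # expect [0 1]
```

An element-wise ℓ1 penalty on the same problem can leave a predictor active for some outputs and zero for others; the row-wise penalty removes or keeps each predictor for all outputs at once, which is the behavior exploited in the experiments below.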
3.1 Optimization There are several techniques for solving the ℓ1,∞ normalized regression, including interior point methods (Turlach et al., 2005) and projected gradient (Duchi et al., 2008; Quattoni et al., 2009). We choose the blockwise coordinate descent approach of Liu et al. (2009) because it is easy to implement and efficient: the time complexity of each iteration is independent of the number of samples.3 Due to space limitations, we defer to Liu et al. (2009) for a complete description of the algorithm. However, we note two aspects of our implementation which are important for natural language processing applications. The algorithm’s efficiency is accomplished by precomputing the matrices C = ˜XT ˜Y and D = ˜XT ˜X, where ˜X and ˜Y are the standardized versions of X and Y, obtained by subtracting the mean and scaling by the variance. Explicit mean correction would destroy the sparse term frequency data representation and render us unable to store the data in memory; however, we can achieve the same effect by computing C = XTY −N ¯xT¯y, where ¯x and ¯y are row vectors indicating the means 3Our implementation is available at http://sailing. cs.cmu.edu/sociolinguistic.html. 1367 of X and Y respectively.4 We can similarly compute D = XTX −N ¯xT¯x. If the number of predictors is too large, it may not be possible to store the dense matrix D in memory. We have found that approximation based on the truncated singular value decomposition provides an effective trade-off of time for space. Specifically, we compute XTX ≈ USVT  USVTT = U  SVTVSTUT = UM. Lower truncation levels are less accurate, but are faster and require less space: for K singular values, the storage cost is O(KP), instead of O(P 2); the time cost increases by a factor of K. This approximation was not necessary in the experiments presented here, although we have found that it performs well as long as the regularizer is not too close to zero. 3.2 Regularization The regularization constant λ can be computed using cross-validation. As λ increases, we reuse the previous solution of B for initialization; this “warm start” trick can greatly accelerate the computation of the overall regularization path (Friedman et al., 2010). At each λi, we solve the sparse multi-output regression; the solution Bi defines a sparse set of predictors for all tasks. We then use this limited set of predictors to construct a new input matrix ˆXi, which serves as the input in a standard ridge regression, thus refitting the model. The tuning set performance of this regression is the score for λi. Such post hoc refitting is often used in tandem with the lasso and related sparse methods; the effectiveness of this procedure has been demonstrated in both theory (Wasserman and Roeder, 2009) and practice (Wu et al., 2010). The regularization parameter of the ridge regression is determined by internal cross-validation. 4 Predicting Demographics from Text Sparse multi-output regression can be used to select a subset of vocabulary items that are especially indicative of demographic and geographic differences. 4Assume without loss of generality that X and Y are scaled to have variance 1, because this scaling does not affect the sparsity pattern. Starting from the regression problem (1), the predictors X are set to the term frequencies, with one column for each word type and one row for each author in the dataset. 
The outputs Y are set to the ten demographic attributes described in Table 1 (we consider much larger demographic feature spaces in the next section) The ℓ1,∞regularizer will drive entire rows of the coefficient matrix B to zero, eliminating all demographic effects for many words. 4.1 Quantitative Evaluation We evaluate the ability of lexical features to predict the demographic attributes of their authors (as proxied by the census data from the author’s geographical area). The purpose of this evaluation is to assess the predictive ability of the compact subset of lexical items identified by the multi-output lasso, as compared with the full vocabulary. In addition, this evaluation establishes a baseline for performance on the demographic prediction task. We perform five-fold cross-validation, using the multi-output lasso to identify a sparse feature set in the training data. We compare against several other dimensionality reduction techniques, matching the number of features obtained by the multioutput lasso at each fold. First, we compare against a truncated singular value decomposition, with the truncation level set to the number of terms selected by the multi-output lasso; this is similar in spirit to vector-based lexical semantic techniques (Sch¨utze and Pedersen, 1993). We also compare against simply selecting the N most frequent terms, and the N terms with the greatest variance in frequency across authors. Finally, we compare against the complete set of all 5,418 terms. As before, we perform post hoc refitting on the training data using a standard ridge regression. The regularization constant for the ridge regression is identified using nested five-fold cross validation within the training set. We evaluate on the refit models on the heldout test folds. The scoring metric is Pearson’s correlation coefficient between the predicted and true demographics: ρ(y, ˆy) = cov(y,ˆy) σyσˆy , with cov(y, ˆy) indicating the covariance and σy indicating the standard deviation. On this metric, a perfect predictor will score 1 and a random predictor will score 0. We report the average correlation across all ten demo1368 10 2 10 3 0.16 0.18 0.2 0.22 0.24 0.26 0.28 number of features average correlation multi−output lasso SVD highest variance most frequent Figure 1: Average correlation plotted against the number of active features (on a logarithmic scale). graphic attributes, as well as the individual correlations. Results Table 2 shows the correlations obtained by regressions performed on a range of different vocabularies, averaged across all five folds. Linguistic features are best at predicting race, ethnicity, language, and the proportion of renters; the other demographic attributes are more difficult to predict. Among feature sets, the highest average correlation is obtained by the full vocabulary, but the multioutput lasso obtains nearly identical performance using a feature set that is an order of magnitude smaller. Applying the Fischer transformation, we find that all correlations are statistically significant at p < .001. The Fischer transformation can also be used to estimate 95% confidence intervals around the correlations. The extent of the confidence intervals varies slightly across attributes, but all are tighter than ±0.02. We find that the multi-output lasso and the full vocabulary regression are not significantly different on any of the attributes. Thus, the multioutput lasso achieves a 93% compression of the feature set without a significant decrease in predictive performance. 
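For reference, the confidence-interval claim above can be reproduced in a few lines. This is a sketch of the standard Fisher z-transform interval, assuming each correlation is computed over the full set of roughly 9,250 authors; the paper does not spell out the exact procedure.

```python
import math

def pearson_r(y, y_hat):
    """Pearson's correlation coefficient, the evaluation metric used above."""
    n = len(y)
    mean_y, mean_h = sum(y) / n, sum(y_hat) / n
    cov = sum((a - mean_y) * (b - mean_h) for a, b in zip(y, y_hat)) / n
    sd_y = math.sqrt(sum((a - mean_y) ** 2 for a in y) / n)
    sd_h = math.sqrt(sum((b - mean_h) ** 2 for b in y_hat) / n)
    return cov / (sd_y * sd_h)

def fisher_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a correlation r over n samples,
    via the Fisher z-transform."""
    z = math.atanh(r)                      # 0.5 * log((1 + r) / (1 - r))
    half_width = z_crit / math.sqrt(n - 3)
    return math.tanh(z - half_width), math.tanh(z + half_width)

# A correlation of 0.26 measured over the ~9,250 authors in the dataset:
print(fisher_ci(0.26, 9250))  # approx. (0.24, 0.28), i.e. within about +/-0.02
```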
The multi-output lasso yields higher correlations than the other dimensionality reduction techniques on all of the attributes; these differences are statistically significant in many—but not all— cases. The correlations for each attribute are clearly not independent, so we do not compare the average across attributes. Recall that the regularization coefficient was chosen by nested cross-validation within the training set; the average number of features selected is 394.6. Figure 1 shows the performance of each dimensionality-reduction technique across the regularization path for the first of five cross-validation folds. Computing the truncated SVD of a sparse matrix at very large truncation levels is computationally expensive, so we cannot draw the complete performance curve for this method. The multi-output lasso dominates the alternatives, obtaining a particularly strong advantage with very small feature sets. This demonstrates its utility for identifying interpretable models which permit qualitative analysis. 4.2 Qualitative Analysis For a qualitative analysis, we retrain the model on the full dataset, and tune the regularization to identify a compact set of 69 features. For each identified term, we apply a significance test on the relationship between the presence of each term and the demographic indicators shown in the columns of the table. Specifically, we apply the Wald test for comparing the means of independent samples, while making the Bonferroni correction for multiple comparisons (Wasserman, 2003). The use of sparse multioutput regression for variable selection increases the power of post hoc significance testing, because the Bonferroni correction bases the threshold for statistical significance on the total number of comparisons. We find 275 associations at the p < .05 level; at the higher threshold required by a Bonferroni correction for comparisons among all terms in the vocabulary, 69 of these associations would have been missed. Table 3 shows the terms identified by our model which have a significant correlation with at least one of the demographic indicators. We divide words in the list into categories, which order alphabetically by the first word in each category: emoticons; standard English, defined as words with Wordnet entries; proper names; abbreviations; non-English words; non-standard words used with English. The categorization was based on the most frequent sense in an informal analysis of our data. A glossary of nonstandard terms is given in Table 4. Some patterns emerge from Table 3. Standard English words tend to appear in areas with more 1369 vocabulary # features average white Afr. Am. Hisp. Eng. lang. Span. lang. other lang. urban family renter med. inc. full 5418 0.260 0.337 0.318 0.296 0.384 0.296 0.256 0.155 0.113 0.295 0.152 multi-output lasso 394.6 0.260 0.326 0.308 0.304 0.383 0.303 0.249 0.153 0.113 0.302 0.156 SVD 0.237 0.321 0.299 0.269 0.352 0.272 0.226 0.138 0.081 0.278 0.136 highest variance 0.220 0.309 0.287 0.245 0.315 0.248 0.199 0.132 0.085 0.250 0.135 most frequent 0.204 0.294 0.264 0.222 0.293 0.229 0.178 0.129 0.073 0.228 0.126 Table 2: Correlations between predicted and observed demographic attributes, averaged across cross validation folds. English speakers; predictably, Spanish words tend to appear in areas with Spanish speakers and Hispanics. Emoticons tend to be used in areas with many Hispanics and few African Americans. 
Abbreviations (e.g., lmaoo) have a nearly uniform demographic profile, displaying negative correlations with whites and English speakers, and positive correlations with African Americans, Hispanics, renters, Spanish speakers, and areas classified as urban. Many non-standard English words (e.g., dats) appear in areas with high proportions of renters, African Americans, and non-English speakers, though a subset (haha, hahaha, and yep) display the opposite demographic pattern. Many of these non-standard words are phonetic transcriptions of standard words or phrases: that’s→dats, what’s up→wassup, I’m going to→ima. The relationship between these transcriptions and the phonological characteristics of dialects such as African-American Vernacular English is a topic for future work. 5 Conjunctive Demographic Features Next, we demonstrate how to select conjunctions of demographic features that predict text. Again, we apply multi-output regression, but now we reverse the direction of inference: the predictors are demographic features, and the outputs are term frequencies. The sparsity-inducing ℓ1,∞norm will select a subset of demographic features that explain the term frequencies. We create an initial feature set f(0)(X) by binning each demographic attribute, using five equalfrequency bins. We then constructive conjunctive features by applying a procedure inspired by related work in computational biology, called “Screen and Clean” (Wu et al., 2010). On iteration i: • Solve the sparse multi-output regression problem Y = f(i)(X)B(i) + ϵ. • Select a subset of features S(i) such that m ∈ S(i) iff maxj |b(i) m,j| > 0. These are the row indices of the predictors with non-zero coefficients. • Create a new feature set f(i+1)(X), including the conjunction of each feature (and its negation) in S(i) with each feature in the initial set f(0)(X). We iterate this process to create features that conjoin as many as three attributes. In addition to the binned versions of the demographic attributes described in Table 1, we include geographical information. We built Gaussian mixture models over the locations, with 3, 5, 8, 12, 17, and 23 components. For each author we include the most likely cluster assignment in each of the six mixture models. For efficiency, the outputs Y are not set to the raw term frequencies; instead we compute a truncated singular value decomposition of the term frequencies W ≈UVDT, and use the basis U. We set the truncation level to 100. 5.1 Quantitative Evaluation The ability of the induced demographic features to predict text is evaluated using a traditional perplexity metric. The same test and training split is used from the vocabulary experiments. We construct a language model from the induced demographic features by training a multi-output ridge regression, which gives a matrix ˆB that maps from demographic features to term frequencies across the entire vocabulary. For each document in the test set, the “raw” predicted language model is ˆyd = f(xd)B, which is then normalized. The probability mass assigned 1370 white Afr. Am. Hisp. Eng. lang. Span. lang. other lang. urban family renter med. inc. 
- + + + + ;) + + :( :) :d + + + as + awesome + + break + campus + dead + + + + hell + shit + train + + will + would + atlanta + famu + + harlem + bbm + + + + lls + + lmaoo + + + + + + lmaooo + + + + + + lmaoooo + + + + + lmfaoo + + + + lmfaooo + + + + lml + + + + + + odee + + + + omw + + + + + + smfh + + + + + + smh + + + w| + + + + + con + + + la + + si + + dats + + deadass + + + + + + haha + hahah + hahaha + + ima + + + madd + + nah + + + + ova + + sis + + skool + + + + wassup + + + + + + wat + + + + + + ya + + yall + yep + yoo + + + + + + yooo + + + Table 3: Demographically-indicative terms discovered by multi-output sparse regression. Statistically significant (p < .05) associations are marked with a + or -. term definition bbm Blackberry Messenger dats that’s dead(ass) very famu Florida Agricultural and Mechanical Univ. ima I’m going to lls laughing like shit lm(f)ao+ laughing my (fucking) ass off lml love my life madd very, lots nah no odee very term definition omw on my way ova over sis sister skool school sm(f)h shake my (fucking) head w| with wassup what’s up wat what ya your, you yall you plural yep yes yoo+ you Table 4: A glossary of non-standard terms from Table 3. Definitions are obtained by manually inspecting the context in which the terms appear, and by consulting www.urbandictionary.com. model perplexity induced demographic features 333.9 raw demographic attributes 335.4 baseline (no demographics) 337.1 Table 5: Word perplexity on test documents, using language models estimated from induced demographic features, raw demographic attributes, and a relativefrequency baseline. Lower scores are better. to unseen words is determined through nested crossvalidation. We compare against a baseline language model obtained from the training set, again using nested cross-validation to set the probability of unseen terms. Results are shown in Table 5. The language models induced from demographic data yield small but statistically significant improvements over the baseline (Wilcoxon signed-rank test, p < .001). Moreover, the model based on conjunctive features significantly outperforms the model constructed from raw attributes (p < .001). 5.2 Features Discovered Our approach discovers 37 conjunctive features, yielding the results shown in Table 5. We sort all features by frequency, and manually select a subset to display in Table 6. Alongside each feature, we show the words with the highest and lowest logodds ratios with respect to the feature. Many of these terms are non-standard; while space does not permit a complete glossary, some are defined in Table 4 or in our earlier work (Eisenstein et al., 2010). 1371 feature positive terms negative terms 1 geo: Northeast m2 brib mangoville soho odeee fasho #ilovefamu foo coo fina 2 geo: NYC mangoville lolss m2 brib wordd bahaha fasho goofy #ilovefamu tacos 4 geo: South+Midwest renter ≤0.615 white ≤0.823 hme muthafucka bae charlotte tx odeee m2 lolss diner mangoville 7 Afr. Am. > 0.101 renter > 0.615 Span. lang. > 0.063 dhat brib odeee lolss wassupp bahaha charlotte california ikr enter 8 Afr. Am. ≤0.207 Hispanic > 0.119 Span. lang. > 0.063 les ahah para san donde bmore ohio #lowkey #twitterjail nahhh 9 geo: NYC Span. lang. ≤0.213 mangoville thatt odeee lolss buzzin landed rodney jawn wiz golf 12 Afr. Am. > 0.442 geo: South+Midwest white ≤0.823 #ilovefamu panama midterms willies #lowkey knoe esta pero odeee hii 15 geo: West Coast other lang. > 0.110 ahah fasho san koo diego granted pride adore phat pressure 17 Afr. Am. 
> 0.442 geo: NYC other lang. ≤0.110 lolss iim buzzin qonna qood foo tender celebs pages pandora 20 Afr. Am. ≤0.207 Span. lang. > 0.063 white > 0.823 del bby cuando estoy muscle knicks becoming uncomfortable large granted 23 Afr. Am. ≤0.050 geo: West Span. lang. ≤0.106 leno it’d 15th hacked government knicks liquor uu hunn homee 33 Afr. Am. > 0.101 geo: SF Bay Span. lang. > 0.063 hella aha california bay o.o aj everywhere phones shift regardless 36 Afr. Am. ≤0.050 geo: DC/Philadelphia Span. lang. ≤0.106 deh opens stuffed yaa bmore hmmmmm dyin tea cousin hella Table 6: Conjunctive features discovered by our method with a strong sparsity-inducing prior, ordered by frequency. We also show the words with high log-odds for each feature (postive terms) and its negation (negative terms). In general, geography was a strong predictor, appearing in 25 of the 37 conjunctions. Features 1 and 2 (F1 and F2) are purely geographical, capturing the northeastern United States and the New York City area. The geographical area of F2 is completely contained by F1; the associated terms are thus very similar, but by having both features, the model can distinguish terms which are used in northeastern areas outside New York City, as well as terms which are especially likely in New York.5 Several features conjoin geography with demographic attributes. For example, F9 further refines the New York City area by focusing on communities that have relatively low numbers of Spanish speakers; F17 emphasizes New York neighborhoods that have very high numbers of African Americans and few speakers of languages other than English and Spanish. The regression model can use these features in combination to make fine-grained distinctions about the differences between such neighborhoods. Outside New York, we see that F4 combines a broad geographic area with attributes that select at least moderate levels of minorities and fewer renters (a proxy for areas that are less urban), while F15 identifies West Coast communities with large num5Mangoville and M2 are clubs in New York; fasho and coo were previously found to be strongly associated with the West Coast (Eisenstein et al., 2010). bers of speakers of languages other than English and Spanish. Race and ethnicity appear in 28 of the 37 conjunctions. The attribute indicating the proportion of African Americans appeared in 22 of these features, strongly suggesting that African American Vernacular English (Rickford, 1999) plays an important role in social media text. Many of these features conjoined the proportion of African Americans with geographical features, identifying local linguistic styles used predominantly in either African American or white communities. Among features which focus on minority communities, F17 emphasizes the New York area, F33 focuses on the San Francisco Bay area, and F12 selects a broad area in the Midwest and South. Conversely, F23 selects areas with very few African Americans and Spanish-speakers in the western part of the United States, and F36 selects for similar demographics in the area of Washington and Philadelphia. Other features conjoined the proportion of African Americans with the proportion of Hispanics and/or Spanish speakers. In some cases, features selected for high proportions of both African Americans and Hispanics; for example, F7 seems to identify a general “urban minority” group, emphasizing renters, African Americans, and Spanish speakers. 
Other features differentiate between African Ameri1372 cans and Hispanics: F8 identifies regions with many Spanish speakers and Hispanics, but few African Americans; F20 identifies regions with both Spanish speakers and whites, but few African Americans. F8 and F20 tend to emphasize more Spanish words than features which select for both African Americans and Hispanics. While race, geography, and language predominate, the socioeconomic attributes appear in far fewer features. The most prevalent attribute is the proportion of renters, which appears in F4 and F7, and in three other features not shown here. This attribute may be a better indicator of the urban/rural divide than the “% urban” attribute, which has a very low threshold for what counts as urban (see Table 1). It may also be a better proxy for wealth than median income, which appears in only one of the thirty-seven selected features. Overall, the selected features tend to include attributes that are easy to predict from text (compare with Table 2). 6 Related Work Sociolinguistics has a long tradition of quantitative and computational research. Logistic regression has been used to identify relationships between demographic features and linguistic variables since the 1970s (Cedergren and Sankoff, 1974). More recent developments include the use of mixed factor models to account for idiosyncrasies of individual speakers (Johnson, 2009), as well as clustering and multidimensional scaling (Nerbonne, 2009) to enable aggregate inference across multiple linguistic variables. However, all of these approaches assume that both the linguistic indicators and demographic attributes have already been identified by the researcher. In contrast, our approach focuses on identifying these indicators automatically from data. We view our approach as an exploratory complement to more traditional analysis. There is relatively little computational work on identifying speaker demographics. Chang et al. (2010) use U.S. Census statistics about the ethnic distribution of last names as an anchor in a latentvariable model that infers the ethnicity of Facebook users; however, their paper analyzes social behavior rather than language use. In unpublished work, David Bamman uses geotagged Twitter text and U.S. Census statistics to estimate the age, gender, and racial distributions of various lexical items.6 Eisenstein et al. (2010) infer geographic clusters that are coherent with respect to both location and lexical distributions; follow-up work by O’Connor et al. (2010) applies a similar generative model to demographic data. The model presented here differs in two key ways: first, we use sparsity-inducing regularization to perform variable selection; second, we eschew high-dimensional mixture models in favor of a bottom-up approach of building conjunctions of demographic and geographic attributes. In a mixture model, each component must define a distribution over all demographic variables, which may be difficult to estimate in a high-dimensional setting. Early examples of the use of sparsity in natural language processing include maximum entropy classification (Kazama and Tsujii, 2003), language modeling (Goodman, 2004), and incremental parsing (Riezler and Vasserman, 2004). These papers all apply the standard lasso, obtaining sparsity for a single output dimension. Structured sparsity has rarely been applied to language tasks, but Duh et al. (2010) reformulated the problem of reranking N-best lists as multi-task learning with structured sparsity. 
7 Conclusion This paper demonstrates how regression with structured sparsity can be applied to select words and conjunctive demographic features that reveal sociolinguistic associations. The resulting models are compact and interpretable, with little cost in accuracy. In the future we hope to consider richer linguistic models capable of identifying multi-word expressions and syntactic variation. Acknowledgments We received helpful feedback from Moira Burke, Scott Kiesling, Seyoung Kim, Andr´e Martins, Kriti Puniyani, and the anonymous reviewers. Brendan O’Connor provided the data for this research, and Seunghak Lee shared a Matlab implementation of the multi-output lasso, which was the basis for our C implementation. This research was enabled by AFOSR FA9550010247, ONR N0001140910758, NSF CAREER DBI-0546594, NSF CAREER IIS-1054319, NSF IIS0713379, an Alfred P. Sloan Fellowship, and Google’s support of the Worldly Knowledge project at CMU. 6http://www.lexicalist.com 1373 References Henrietta J. Cedergren and David Sankoff. 1974. Variable rules: Performance as a statistical reflection of competence. Language, 50(2):333–355. Jonathan Chang, Itamar Rosenn, Lars Backstrom, and Cameron Marlow. 2010. ePluribus: Ethnicity on social networks. In Proceedings of ICWSM. John Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. 2008. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of ICML. Kevin Duh, Katsuhito Sudoh, Hajime Tsukada, Hideki Isozaki, and Masaaki Nagata. 2010. n-best reranking by multitask learning. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and Metrics. Jacob Eisenstein, Brendan O’Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model of geographic lexical variation. In Proceedings of EMNLP. Jerome Friedman, Trevor Hastie, and Rob Tibshirani. 2010. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22. Joshua Goodman. 2004. Exponential priors for maximum entropy models. In Proceedings of NAACL-HLT. Daniel E. Johnson. 2009. Getting off the GoldVarb standard: Introducing Rbrul for mixed-effects variable rule analysis. Language and Linguistics Compass, 3(1):359–383. Jun’ichi Kazama and Jun’ichi Tsujii. 2003. Evaluation and extension of maximum entropy models with inequality constraints. In Proceedings of EMNLP. William Labov. 1966. The Social Stratification of English in New York City. Center for Applied Linguistics. Han Liu, Mark Palatucci, and Jian Zhang. 2009. Blockwise coordinate descent procedures for the multi-task lasso, with applications to neural semantic basis discovery. In Proceedings of ICML. John Nerbonne. 2009. Data-driven dialectology. Language and Linguistics Compass, 3(1):175–198. Brendan O’Connor, Jacob Eisenstein, Eric P. Xing, and Noah A. Smith. 2010. A mixture model of demographic lexical variation. In Proceedings of NIPS Workshop on Machine Learning in Computational Social Science. Ariadna Quattoni, Xavier Carreras, Michael Collins, and Trevor Darrell. 2009. An efficient projection for ℓ1,∞ regularization. In Proceedings of ICML. John R. Rickford. 1999. African American Vernacular English. Blackwell. Stefan Riezler and Alexander Vasserman. 2004. Incremental feature selection and ℓ1 regularization for relaxed maximum-entropy modeling. In Proceedings of EMNLP. Gerard Rushton, Marc P. Armstrong, Josephine Gittler, Barry R. Greene, Claire E. Pavlik, Michele M. West, and Dale L. Zimmerman, editors. 2008. 
Geocoding Health Data: The Use of Geographic Codes in Cancer Prevention and Control, Research, and Practice. CRC Press. Hinrich Sch¨utze and Jan Pedersen. 1993. A vector model for syntagmatic and paradigmatic relatedness. In Proceedings of the 9th Annual Conference of the UW Centre for the New OED and Text Research. Aaron Smith and Lee Rainie. 2010. Who tweets? Technical report, Pew Research Center, December. Berwin A. Turlach, William N. Venables, and Stephen J. Wright. 2005. Simultaneous variable selection. Technometrics, 47(3):349–363. Larry Wasserman and Kathryn Roeder. 2009. Highdimensional variable selection. Annals of Statistics, 37(5A):2178–2201. Larry Wasserman. 2003. All of Statistics: A Concise Course in Statistical Inference. Springer. Jing Wu, Bernie Devlin, Steven Ringquist, Massimo Trucco, and Kathryn Roeder. 2010. Screen and clean: A tool for identifying interactions in genome-wide association studies. Genetic Epidemiology, 34(3):275– 285. Qing Zhang. 2005. A Chinese yuppie in Beijing: Phonological variation and the construction of a new professional identity. Language in Society, 34:431–466. Kathryn Zickuhr and Aaron Smith. 2010. 4% of online Americans use location-based services. Technical report, Pew Research Center, November. 1374
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1375–1384, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Local and Global Algorithms for Disambiguation to Wikipedia Lev Ratinov 1 Dan Roth1 Doug Downey2 Mike Anderson3 1University of Illinois at Urbana-Champaign {ratinov2|danr}@uiuc.edu 2Northwestern University [email protected] 3Rexonomy [email protected] Abstract Disambiguating concepts and entities in a context sensitive way is a fundamental problem in natural language processing. The comprehensiveness of Wikipedia has made the online encyclopedia an increasingly popular target for disambiguation. Disambiguation to Wikipedia is similar to a traditional Word Sense Disambiguation task, but distinct in that the Wikipedia link structure provides additional information about which disambiguations are compatible. In this work we analyze approaches that utilize this information to arrive at coherent sets of disambiguations for a given document (which we call “global” approaches), and compare them to more traditional (local) approaches. We show that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat. 1 Introduction Wikification is the task of identifying and linking expressions in text to their referent Wikipedia pages. Recently, Wikification has been shown to form a valuable component for numerous natural language processing tasks including text classification (Gabrilovich and Markovitch, 2007b; Chang et al., 2008), measuring semantic similarity between texts (Gabrilovich and Markovitch, 2007a), crossdocument co-reference resolution (Finin et al., 2009; Mayfield et al., 2009), and other tasks (Kulkarni et al., 2009). Previous studies on Wikification differ with respect to the corpora they address and the subset of expressions they attempt to link. For example, some studies focus on linking only named entities, whereas others attempt to link all “interesting” expressions, mimicking the link structure found in Wikipedia. Regardless, all Wikification systems are faced with a key Disambiguation to Wikipedia (D2W) task. In the D2W task, we’re given a text along with explicitly identified substrings (called mentions) to disambiguate, and the goal is to output the corresponding Wikipedia page, if any, for each mention. For example, given the input sentence “I am visiting friends in <Chicago>,” we output http://en.wikipedia.org/wiki/Chicago – the Wikipedia page for the city of Chicago, Illinois, and not (for example) the page for the 2002 film of the same name. Local D2W approaches disambiguate each mention in a document separately, utilizing clues such as the textual similarity between the document and each candidate disambiguation’s Wikipedia page. Recent work on D2W has tended to focus on more sophisticated global approaches to the problem, in which all mentions in a document are disambiguated simultaneously to arrive at a coherent set of disambiguations (Cucerzan, 2007; Milne and Witten, 2008b; Han and Zhao, 2009). For example, if a mention of “Michael Jordan” refers to the computer scientist rather than the basketball player, then we would expect a mention of “Monte Carlo” in the same document to refer to the statistical technique rather than the location. Global approaches utilize the Wikipedia link graph to estimate coherence. 1375 m1 = Taiwan m2 = China m3 = Jiangsu Province .............. 
Candidate titles: t1 = Taiwan, t2 = Chinese Taipei, t3 = Republic of China, t4 = China, t5 = People's Republic of China, t6 = History of China, t7 = Jiangsu; local scores φ(m1, t1), φ(m1, t2), φ(m1, t3), ... and relatedness scores ψ(t1, t7), ψ(t3, t7), ψ(t5, t7) label the edges.
Figure 1: Sample Disambiguation to Wikipedia problem with three mentions. The mention "Jiangsu" is unambiguous. The correct mapping from mentions to titles is marked by heavy edges.

In this paper, we analyze global and local approaches to the D2W task. Our contributions are as follows: (1) We present a formulation of the D2W task as an optimization problem with local and global variants, and identify the strengths and the weaknesses of each, (2) Using this formulation, we present a new global D2W system, called GLOW. In experiments on existing and novel D2W data sets,1 GLOW is shown to outperform the previous state-of-the-art system of (Milne and Witten, 2008b), (3) We present an error analysis and identify the key remaining challenge: determining when mentions refer to concepts not captured in Wikipedia.

2 Problem Definition and Approach

We formalize our Disambiguation to Wikipedia (D2W) task as follows. We are given a document d with a set of mentions M = {m1, . . . , mN}, and our goal is to produce a mapping from the set of mentions to the set of Wikipedia titles W = {t1, . . . , t|W|}. Often, mentions correspond to a concept without a Wikipedia page; we treat this case by adding a special null title to the set W. The D2W task can be visualized as finding a many-to-one matching on a bipartite graph, with mentions forming one partition and Wikipedia titles the other (see Figure 1). We denote the output matching as an N-tuple Γ = (t1, . . . , tN), where ti is the output disambiguation for mention mi.

1The data sets are available for download at http://cogcomp.cs.illinois.edu/Data

2.1 Local and Global Disambiguation

A local D2W approach disambiguates each mention mi separately. Specifically, let φ(mi, tj) be a score function reflecting the likelihood that the candidate title tj ∈ W is the correct disambiguation for mi ∈ M. A local approach solves the following optimization problem:

Γ*_local = argmax_Γ Σ_{i=1}^{N} φ(mi, ti)    (1)

Local D2W approaches, exemplified by (Bunescu and Pasca, 2006) and (Mihalcea and Csomai, 2007), utilize φ functions that assign higher scores to titles with content similar to that of the input document. We expect, all else being equal, that the correct disambiguations will form a "coherent" set of related concepts. Global approaches define a coherence function ψ, and attempt to solve the following disambiguation problem:

Γ* = argmax_Γ [ Σ_{i=1}^{N} φ(mi, ti) + ψ(Γ) ]    (2)

The global optimization problem in Eq. 2 is NP-hard, and approximations are required (Cucerzan, 2007). The common approach is to utilize the Wikipedia link graph to obtain an estimate of pairwise relatedness between titles ψ(ti, tj) and to efficiently generate a disambiguation context Γ′, a rough approximation to the optimal Γ*. We then solve the easier problem:

Γ* ≈ argmax_Γ Σ_{i=1}^{N} [ φ(mi, ti) + Σ_{tj ∈ Γ′} ψ(ti, tj) ]    (3)

Eq. 3 can be solved by finding each ti and then mapping mi independently as in a local approach, but still enforces some degree of coherence among the disambiguations.

3 Related Work

Wikipedia was first explored as an information source for named entity disambiguation and information retrieval by Bunescu and Pasca (2006).
There, disambiguation is performed using an SVM kernel that compares the lexical context around the ambiguous named entity to the content of the candidate disambiguation’s Wikipedia page. However, since each ambiguous mention required a separate SVM model, the experiment was on a very limited scale. Mihalcea and Csomai applied Word Sense Disambiguation methods to the Disambiguation to Wikipedia task (2007). They experimented with two methods: (a) the lexical overlap between the Wikipedia page of the candidate disambiguations and the context of the ambiguous mention, and (b) training a Naive Bayes classiffier for each ambiguous mention, using the hyperlink information found in Wikipedia as ground truth. Both (Bunescu and Pasca, 2006) and (Mihalcea and Csomai, 2007) fall into the local framework. Subsequent work on Wikification has stressed that assigned disambiguations for the same document should be related, introducing the global approach (Cucerzan, 2007; Milne and Witten, 2008b; Han and Zhao, 2009; Ferragina and Scaiella, 2010). The two critical components of a global approach are the semantic relatedness function ψ between two titles, and the disambiguation context Γ′. In (Milne and Witten, 2008b), the semantic context is defined to be a set of “unambiguous surface forms” in the text, and the title relatedness ψ is computed as Normalized Google Distance (NGD) (Cilibrasi and Vitanyi, 2007).2 On the other hand, in (Cucerzan, 2007) the disambiguation context is taken to be all plausible disambiguations of the named entities in the text, and title relatedness is based on the overlap in categories and incoming links. Both approaches have limitations. The first approach relies on the pres2(Milne and Witten, 2008b) also weight each mention in Γ′ by its estimated disambiguation utility, which can be modeled by augmenting ψ on per-problem basis. ence of unambiguous mentions in the input document, and the second approach inevitably adds irrelevant titles to the disambiguation context. As we demonstrate in our experiments, by utilizing a more accurate disambiguation context, GLOW is able to achieve better performance. 4 System Architecture In this section, we present our global D2W system, which solves the optimization problem in Eq. 3. We refer to the system as GLOW, for Global Wikification. We use GLOW as a test bed for evaluating local and global approaches for D2W. GLOW combines a powerful local model φ with an novel method for choosing an accurate disambiguation context Γ′, which as we show in our experiments allows it to outperform the previous state of the art. We represent the functions φ and ψ as weighted sums of features. Specifically, we set: φ(m, t) = X i wiφi(m, t) (4) where each feature φi(m, t) captures some aspect of the relatedness between the mention m and the Wikipedia title t. Feature functions ψi(t, t′) are defined analogously. We detail the specific feature functions utilized in GLOW in following sections. The coefficients wi are learned using a Support Vector Machine over bootstrapped training data from Wikipedia, as described in Section 4.5. At a high level, the GLOW system optimizes the objective function in Eq. 3 in a two-stage process. We first execute a ranker to obtain the best non-null disambiguation for each mention in the document, and then execute a linker that decides whether the mention should be linked to Wikipedia, or whether instead switching the top-ranked disambiguation to null improves the objective function. 
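The sketch below shows how the ranker and linker interact at prediction time. The feature functions, weights, context, and the use of a score threshold in place of GLOW's learned linker classifier are all placeholders for illustration.

```python
def phi(mention, title, weights, feature_fns):
    """Local score: a weighted sum of feature functions, in the spirit of Eq. 4."""
    return sum(w * f(mention, title) for w, f in zip(weights, feature_fns))

def rank_and_link(mentions, candidates, phi_score, psi, context, link_threshold):
    """Two-stage prediction in the spirit of Figure 2.

    Ranker: for each mention, pick the non-null candidate that maximizes the
    local score plus its summed relatedness to the disambiguation context
    (the objective of Eq. 3). Linker: map the mention to null when the best
    score does not clear a threshold (a crude stand-in for the trained linker)."""
    output = {}
    for m in mentions:
        best_title, best_score = None, float("-inf")
        for t in candidates.get(m, []):
            score = phi_score(m, t) + sum(psi(t, t_ctx) for t_ctx in context)
            if score > best_score:
                best_title, best_score = t, score
        output[m] = best_title if best_score >= link_threshold else None
    return output

# Toy usage with a single made-up feature (a P(t|m)-like prior):
prior = {"People's Republic of China": 0.6, "History of China": 0.2}
feature_fns = [lambda m, t: prior.get(t, 0.0)]
phi_toy = lambda m, t: phi(m, t, weights=[1.0], feature_fns=feature_fns)
psi_toy = lambda t1, t2: 0.3 if {t1, t2} == {"People's Republic of China", "Jiangsu"} else 0.0
cands = {"China": ["People's Republic of China", "History of China"]}
print(rank_and_link(["China"], cands, phi_toy, psi_toy,
                    context=["Jiangsu"], link_threshold=0.5))
# -> {'China': "People's Republic of China"}
```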
As our experiments illustrate, the linking task is the more challenging of the two by a significant margin. Figure 2 provides detailed pseudocode for GLOW. Given a document d and a set of mentions M, we start by augmenting the set of mentions with all phrases in the document that could be linked to Wikipedia, but were not included in M. Introducing these additional mentions provides context that may be informative for the global coherence computation (it has no effect on local approaches). In the second 1377 Algorithm: Disambiguate to Wikipedia Input: document d, Mentions M = {m1, . . . , mN} Output: a disambiguation Γ = (t1, . . . , tN). 1) Let M ′ = M∪{ Other potential mentions in d} 2) For each mention m′ i ∈M ′, construct a set of disambiguation candidates Ti = {ti 1, . . . , ti ki}, ti j ̸= null 3) Ranker: Find a solution Γ = (t′ 1, . . . , t′ |M′|), where t′ i ∈Ti is the best non-null disambiguation of m′ i. 4) Linker: For each m′ i, map t′ i to null in Γ iff doing so improves the objective function 5) Return Γ entries for the original mentions M. Figure 2: High-level pseudocode for GLOW. step, we construct for each mention mi a limited set of candidate Wikipedia titles Ti that mi may refer to. Considering only a small subset of Wikipedia titles as potential disambiguations is crucial for tractability (we detail which titles are selected below). In the third step, the ranker outputs the most appropriate non-null disambiguation ti for each mention mi. In the final step, the linker decides whether the top-ranked disambiguation is correct. The disambiguation (mi, ti) may be incorrect for several reasons: (1) mention mi does not have a corresponding Wikipedia page, (2) mi does have a corresponding Wikipedia page, but it was not included in Ti, or (3) the ranker erroneously chose an incorrect disambiguation over the correct one. In the below sections, we describe each step of the GLOW algorithm, and the local and global features utilized, in detail. Because we desire a system that can process documents at scale, each step requires trade-offs between accuracy and efficiency. 4.1 Disambiguation Candidates Generation The first step in GLOW is to extract all mentions that can refer to Wikipedia titles, and to construct a set of disambiguation candidates for each mention. Following previous work, we use Wikipedia hyperlinks to perform these steps. GLOW utilizes an anchortitle index, computed by crawling Wikipedia, that maps each distinct hyperlink anchor text to its target Wikipedia titles. For example, the anchor text “Chicago” is used in Wikipedia to refer both to the city in Illinois and to the movie. Anchor texts in the index that appear in document d are used to supplement the mention set M in Step 1 of the GLOW algorithm in Figure 2. Because checking all substrings Baseline Feature: P(t|m), P(t) Local Features: φi(t, m) cosine-sim(Text(t),Text(m)) : Naive/Reweighted cosine-sim(Text(t),Context(m)): Naive/Reweighted cosine-sim(Context(t),Text(m)): Naive/Reweighted cosine-sim(Context(t),Context(m)): Naive/Reweighted Global Features: ψi(ti, tj) I[ti−tj]∗PMI(InLinks(ti),InLinks(tj)) : avg/max I[ti−tj]∗NGD(InLinks(ti),InLinks(tj)) : avg/max I[ti−tj]∗PMI(OutLinks(ti),OutLinks(tj)) : avg/max I[ti−tj]∗NGD(OutLinks(ti),OutLinks(tj)) : avg/max I[ti↔tj] : avg/max I[ti↔tj]∗PMI(InLinks(ti),InLinks(tj)) : avg/max I[ti↔tj]∗NGD(InLinks(ti),InLinks(tj)) : avg/max I[ti↔tj]∗PMI(OutLinks(ti),OutLinks(tj)) : avg/max I[ti↔tj]∗NGD(OutLinks(ti),OutLinks(tj)) : avg/max Table 1: Ranker features. 
I[ti−tj] is an indicator variable which is 1 iff ti links to tj or vise-versa. I[ti↔tj] is 1 iff the titles point to each other. in the input text against the index is computationally inefficient, we instead prune the search space by applying a publicly available shallow parser and named entity recognition system.3 We consider only the expressions marked as named entities by the NER tagger, the noun-phrase chunks extracted by the shallow parser, and all sub-expressions of up to 5 tokens of the noun-phrase chunks. To retrieve the disambiguation candidates Ti for a given mention mi in Step 2 of the algorithm, we query the anchor-title index. Ti is taken to be the set of titles most frequently linked to with anchor text mi in Wikipedia. For computational efficiency, we utilize only the top 20 most frequent target pages for the anchor text; the accuracy impact of this optimization is analyzed in Section 6. From the anchor-title index, we compute two local features φi(m, t). The first, P(t|m), is the fraction of times the title t is the target page for an anchor text m. This single feature is a very reliable indicator of the correct disambiguation (Fader et al., 2009), and we use it as a baseline in our experiments. The second, P(t), gives the fraction of all Wikipedia articles that link to t. 4.2 Local Features φ In addition to the two baseline features mentioned in the previous section, we compute a set of text-based 3Available at http://cogcomp.cs.illinois.edu/page/software. 1378 local features φ(t, m). These features capture the intuition that a given Wikipedia title t is more likely to be referred to by mention m appearing in document d if the Wikipedia page for t has high textual similarity to d, or if the context surrounding hyperlinks to t are similar to m’s context in d. For each Wikipedia title t, we construct a top200 token TF-IDF summary of the Wikipedia page t, which we denote as Text(t) and a top-200 token TF-IDF summary of the context within which t was hyperlinked to in Wikipedia, which we denote as Context(t). We keep the IDF vector for all tokens in Wikipedia, and given an input mention m in a document d, we extract the TF-IDF representation of d, which we denote Text(d), and a TF-IDF representation of a 100-token window around m, which we denote Context(m). This allows us to define four local features described in Table 1. We additionally compute weighted versions of the features described above. Error analysis has shown that in many cases the summaries of the different disambiguation candidates for the same surface form s were very similar. For example, consider the disambiguation candidates of “China’ and their TF-IDF summaries in Figure 1. The majority of the terms selected in all summaries refer to the general issues related to China, such as “legalism, reform, military, control, etc.”, while a minority of the terms actually allow disambiguation between the candidates. The problem stems from the fact that the TF-IDF summaries are constructed against the entire Wikipedia, and not against the confusion set of disambiguation candidates of m. Therefore, we re-weigh the TF-IDF vectors using the TF-IDF scheme on the disambiguation candidates as a adhoc document collection, similarly to an approach in (Joachims, 1997) for classifying documents. In our scenario, the TF of the a token is the original TF-IDF summary score (a real number), and the IDF term is the sum of all the TF-IDF scores for the token within the set of disambiguation candidates for m. 
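One plausible reading of this confusion-set reweighting is sketched below; the exact normalization used in GLOW may differ, and the guard against zero token mass is our own addition.

```python
# Sketch of the confusion-set reweighting described above.  Each candidate
# title's summary is a {token: tf_idf_score} dict; tokens shared by all
# candidates of the same mention are down-weighted, tokens unique to one
# candidate keep most of their weight.

from collections import defaultdict

def reweight_summaries(summaries):
    """summaries: {title: {token: tf_idf_score}} for one mention's disambiguation candidates."""
    token_mass = defaultdict(float)
    for scores in summaries.values():
        for token, score in scores.items():
            token_mass[token] += score      # "IDF" term: total score of the token in the confusion set
    return {
        title: {token: (score / token_mass[token] if token_mass[token] else 0.0)
                for token, score in scores.items()}
        for title, scores in summaries.items()
    }
```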
This adds 4 more “reweighted local” features in Table 1. 4.3 Global Features ψ Global approaches require a disambiguation context Γ′ and a relatedness measure ψ in Eq. 3. In this section, we describe our method for generating a disambiguation context, and the set of global features ψi(t, t′) forming our relatedness measure. In previous work, Cucerzan defined the disambiguation context as the union of disambiguation candidates for all the named entity mentions in the input document (2007). The disadvantage of this approach is that irrelevant titles are inevitably added to the disambiguation context, creating noise. Milne and Witten, on the other hand, use a set of unambiguous mentions (2008b). This approach utilizes only a fraction of the available mentions for context, and relies on the presence of unambiguous mentions with high disambiguation utility. In GLOW, we utilize a simple and efficient alternative approach: we first train a local disambiguation system, and then use the predictions of that system as the disambiguation context. The advantage of this approach is that unlike (Milne and Witten, 2008b) we use all the available mentions in the document, and unlike (Cucerzan, 2007) we reduce the amount of irrelevant titles in the disambiguation context by taking only the top-ranked disambiguation per mention. Our global features are refinements of previously proposed semantic relatedness measures between Wikipedia titles. We are aware of two previous methods for estimating the relatedness between two Wikipedia concepts: (Strube and Ponzetto, 2006), which uses category overlap, and (Milne and Witten, 2008a), which uses the incoming link structure. Previous work experimented with two relatedness measures: NGD, and Specificity-weighted Cosine Similarity. Consistent with previous work, we found NGD to be the better-performing of the two. Thus we use only NGD along with a well-known Pontwise Mutual Information (PMI) relatedness measure. Given a Wikipedia title collection W, titles t1 and t2 with a set of incoming links L1, and L2 respectively, PMI and NGD are defined as follows: NGD(L1, L2) = Log(Max(|L1|, |L2|)) −Log(|L1 ∩L2|) Log(|W |) −Log(Min(|L1|, |L2|)) PMI(L1, L2) = |L1 ∩L2|/|W | |L1|/|W ||L2|/|W | The NGD and the PMI measures can also be computed over the set of outgoing links, and we include these as features as well. We also included a feature indicating whether the articles each link to one 1379 another. Lastly, rather than taking the sum of the relatedness scores as suggested by Eq. 3, we use two features: the average and the maximum relatedness to Γ′. We expect the average to be informative for many documents. The intuition for also including the maximum relatedness is that for longer documents that may cover many different subtopics, the maximum may be more informative than the average. We have experimented with other semantic features, such as category overlap or cosine similarity between the TF-IDF summaries of the titles, but these did not improve performance in our experiments. The complete set of global features used in GLOW is given in Table 1. 4.4 Linker Features Given the mention m and the top-ranked disambiguation t, the linker attempts to decide whether t is indeed the correct disambiguation of m. The linker includes the same features as the ranker, plus additional features we expect to be particularly relevant to the task. 
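Returning briefly to the relatedness measures of Section 4.3, the two formulas translate directly into code; L1 and L2 are the incoming (or outgoing) link sets of two titles and W is the total number of Wikipedia titles. The zero-intersection fallback is our own assumption, since the paper does not state how that case is handled.

```python
# NGD and PMI over Wikipedia link sets, as defined in Section 4.3:
#   NGD(L1, L2) = (log max(|L1|,|L2|) - log |L1 ∩ L2|) / (log |W| - log min(|L1|,|L2|))
#   PMI(L1, L2) = (|L1 ∩ L2| / |W|) / ((|L1| / |W|) * (|L2| / |W|))

import math

def ngd(L1, L2, W):
    inter = len(L1 & L2)
    if inter == 0:
        return 0.0                                  # assumption: no shared links -> zero relatedness
    return (math.log(max(len(L1), len(L2))) - math.log(inter)) / \
           (math.log(W) - math.log(min(len(L1), len(L2))))

def pmi(L1, L2, W):
    inter = len(L1 & L2)
    if inter == 0:
        return 0.0
    return (inter / W) / ((len(L1) / W) * (len(L2) / W))
```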
We include the confidence of the ranker in t with respect to second-best disambiguation t′, intended to estimate whether the ranker may have made a mistake. We also include several properties of the mention m: the entropy of the distribution P(t|m), the percent of Wikipedia titles in which m appears hyperlinked versus the percent of times m appears as plain text, whether m was detected by NER as a named entity, and a Good-Turing estimate of how likely m is to be out-of-Wikipedia concept based on the counts in P(t|m). 4.5 Linker and Ranker Training We train the coefficients for the ranker features using a linear Ranking Support Vector Machine, using training data gathered from Wikipedia. Wikipedia links are considered gold-standard links for the training process. The methods for compiling the Wikipedia training corpus are given in Section 5. We train the linker as a separate linear Support Vector Machine. Training data for the linker is obtained by applying the ranker on the training set. The mentions for which the top-ranked disambiguation did not match the gold disambiguation are treated as negative examples, while the mentions the ranker got correct serve as positive examples. Mentions/Distinct titles data set Gold Identified Solvable ACE 257/255 213/212 185/184 MSNBC 747/372 530/287 470/273 AQUAINT 727/727 601/601 588/588 Wikipedia 928/813 855/751 843/742 Table 2: Number of mentions and corresponding distinct titles by data set. Listed are (number of mentions)/(numberof distinct titles) for each data set, for each of three mention types. Gold mentions include all disambiguated mentions in the data set. Identified mentions are gold mentions whose correct disambiguations exist in GLOW’s author-title index. Solvable mentions are identified mentions whose correct disambiguations are among the candidates selected by GLOW (see Table 3). 5 Data sets and Evaluation Methodology We evaluate GLOW on four data sets, of which two are from previous work. The first data set, from (Milne and Witten, 2008b), is a subset of the AQUAINT corpus of newswire text that is annotated to mimic the hyperlink structure in Wikipedia. That is, only the first mentions of “important” titles were hyperlinked. Titles deemed uninteresting and redundant mentions of the same title are not linked. The second data set, from (Cucerzan, 2007), is taken from MSNBC news and focuses on disambiguating named entities after running NER and co-reference resolution systems on newsire text. In this case, all mentions of all the detected named entities are linked. We also constructed two additional data sets. The first is a subset of the ACE co-reference data set, which has the advantage that mentions and their types are given, and the co-reference is resolved. We asked annotators on Amazon’s Mechanical Turk to link the first nominal mention of each co-reference chain to Wikipedia, if possible. Finding the accuracy of a majority vote of these annotations to be approximately 85%, we manually corrected the annotations to obtain ground truth for our experiments. The second data set we constructed, Wiki, is a sample of paragraphs from Wikipedia pages. Mentions in this data set correspond to existing hyperlinks in the Wikipedia text. Because Wikipedia editors explicitly link mentions to Wikipedia pages, their anchor text tends to match the title of the linked-topage—as a result, in the overwhelming majority of 1380 cases, the disambiguation decision is as trivial as string matching. 
In an attempt to generate more challenging data, we extracted 10,000 random paragraphs for which choosing the top disambiguation according to P(t|m) results in at least a 10% ranker error rate. 40 paragraphs of this data was utilized for testing, while the remainder was used for training. The data sets are summarized in Table 2. The table shows the number of annotated mentions which were hyperlinked to non-null Wikipedia pages, and the number of titles in the documents (without counting repetitions). For example, the AQUAINT data set contains 727 mentions,4 all of which refer to distinct titles. The MSNBC data set contains 747 mentions mapped to non-null Wikipedia pages, but some mentions within the same document refer to the same titles. There are 372 titles in the data set, when multiple instances of the same title within one document are not counted. To isolate the performance of the individual components of GLOW, we use multiple distinct metrics for evaluation. Ranker accuracy, which measures the performance of the ranker alone, is computed only over those mentions with a non-null gold disambiguation that appears in the candidate set. It is equal to the fraction of these mentions for which the ranker returns the correct disambiguation. Thus, a perfect ranker should achieve a ranker accuracy of 1.0, irrespective of limitations of the candidate generator. Linker accuracy is defined as the fraction of all mentions for which the linker outputs the correct disambiguation (note that, when the title produced by the ranker is incorrect, this penalizes linker accuracy). Lastly, we evaluate our whole system against other baselines using a previously-employed “bag of titles” (BOT) evaluation (Milne and Witten, 2008b). In BOT, we compare the set of titles output for a document with the gold set of titles for that document (ignoring duplicates), and utilize standard precision, recall, and F1 measures. In BOT, the set of titles is collected from the mentions hyperlinked in the gold annotation. That is, if the gold annotation is { (China, People’s Republic of China), (Taiwan, Taiwan), (Jiangsu, Jiangsu)} 4The data set contains votes on how important the mentions are. We believe that the results in (Milne and Witten, 2008b) were reported on mentions which the majority of annotators considered important. In contrast, we used all the mentions. Generated data sets Candidates k ACE MSNBC AQUAINT Wiki 1 81.69 72.26 91.01 84.79 3 85.44 86.22 96.83 94.73 5 86.38 87.35 97.17 96.37 20 86.85 88.67 97.83 98.59 Table 3: Percent of “solvable” mentions as a function of the number of generated disambiguation candidates. Listed is the fraction of identified mentions m whose target disambiguation t is among the top k candidates ranked in descending order of P(t|m). and the predicted anotation is: { (China, People’s Republic of China), (China, History of China), (Taiwan, null), (Jiangsu, Jiangsu), (republic, Government)} , then the BOT for the gold annotation is: {People’s Republic of China, Taiwan, Jiangsu} , and the BOT for the predicted annotation is: {People’s Republic of China, History of China, Jiangsu} . The title Government is not included in the BOT for predicted annotation, because its associate mention republic did not appear as a mention in the gold annotation. Both the precision and the recall of the above prediction is 0.66. 
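The BOT scoring just illustrated can be written down in a few lines; the example reproduces the gold and predicted title sets from the text and yields precision and recall of 0.66.

```python
# "Bag of titles" (BOT) evaluation: per-document sets of titles are
# compared, duplicates ignored; mentions absent from the gold annotation
# are assumed to have been filtered out already.

def bot_prf(gold_titles, predicted_titles):
    gold, pred = set(gold_titles), set(predicted_titles)
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {"People's Republic of China", "Taiwan", "Jiangsu"}
pred = {"People's Republic of China", "History of China", "Jiangsu"}
print(bot_prf(gold, pred))   # (0.666..., 0.666..., 0.666...)
```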
We note that in the BOT evaluation, following (Milne and Witten, 2008b) we consider all the titles within a document, even if some the titles were due to mentions we failed to identify.5 6 Experiments and Results In this section, we evaluate and analyze GLOW’s performance on the D2W task. We begin by evaluating the mention detection component (Step 1 of the algorithm). The second column of Table 2 shows how many of the “non-null” mentions and corresponding titles we could successfully identify (e.g. out of 747 mentions in the MSNBC data set, only 530 appeared in our anchor-title index). Missing entities were primarily due to especially rare surface forms, or sometimes due to idiosyncratic capitalization in the corpus. Improving the number of identified mentions substantially is non-trivial; (Zhou et al., 2010) managed to successfully identify only 59 more entities than we do in the MSNBC data set, using a much more powerful detection method based on search engine query logs. We generate disambiguation candidates for a 5We evaluate the mention identification stage in Section 6. 1381 Data sets Features ACE MSNBC AQUAINT Wiki P(t|m) 94.05 81.91 93.19 85.88 P(t|m)+Local Naive 95.67 84.04 94.38 92.76 Reweighted 96.21 85.10 95.57 93.59 All above 95.67 84.68 95.40 93.59 P(t|m)+Global NER 96.21 84.04 94.04 89.56 Unambiguous 94.59 84.46 95.40 89.67 Predictions 96.75 88.51 95.91 89.79 P(t|m)+Local+Global All features 97.83 87.02 94.38 94.18 Table 4: Ranker Accuracy. Bold values indicate the best performance in each feature group. The global approaches marginally outperform the local approaches on ranker accuracy , while combing the approaches leads to further marginal performance improvement. mention m using an anchor-title index, choosing the 20 titles with maximal P(t|m). Table 3 evaluates the accuracy of this generation policy. We report the percent of mentions for which the correct disambiguation is generated in the top k candidates (called “solvable” mentions). We see that the baseline prediction of choosing the disambiguation t which maximizes P(t|m) is very strong (80% of the correct mentions have maximal P(t|m) in all data sets except MSNBC). The fraction of solvable mentions increases until about five candidates per mention are generated, after which the increase is rather slow. Thus, we believe choosing a limit of 20 candidates per mention offers an attractive trade-off of accuracy and efficiency. The last column of Table 2 reports the number of solvable mentions and the corresponding number of titles with a cutoff of 20 disambiguation candidates, which we use in our experiments. Next, we evaluate the accuracy of the ranker. Table 4 compares the ranker performance with baseline, local and global features. The reweighted local features outperform the unweighted (“Naive”) version, and the global approach outperforms the local approach on all data sets except Wikipedia. As the table shows, our approach of defining the disambiguation context to be the predicted disambiguations of a simpler local model (“Predictions”) performs better than using NER entities as in (Cucerzan, 2007), or only the unambiguous entiData set Local Global Local+Global ACE 80.1 →82.8 80.6 →80.6 81.5 →85.1 MSNBC 74.9 →76.0 77.9 →77.9 76.5 →76.9 AQUAINT 93.5 →91.5 93.8 →92.1 92.3 →91.3 Wiki 92.2 →92.0 88.5 →87.2 92.8 →92.6 Table 5: Linker performance. The notation X →Y means that when linking all mentions, the linking accuracy is X, while when applying the trained linker, the performance is Y . 
The local approaches are better suited for linking than the global approaches. The linking accuracy is very sensitive to domain changes. System ACE MSNBC AQUAINT Wiki Baseline: P(t|m) 69.52 72.83 82.67 81.77 GLOW Local 75.60 74.39 84.52 90.20 GLOW Global 74.73 74.58 84.37 86.62 GLOW 77.25 74.88 83.94 90.54 M&W 72.76 68.49 83.61 80.32 Table 6: End systems performance - BOT F1. The performance of the full system (GLOW) is similar to that of the local version. GLOW outperforms (Milne and Witten, 2008b) on all data sets. ties as in (Milne and Witten, 2008b).6 Combining the local and the global approaches typically results in minor improvements. While the global approaches are most effective for ranking, the linking problem has different characteristics as shown in Table 5. We can see that the global features are not helpful in general for predicting whether the top-ranked disambiguation is indeed the correct one. Further, although the trained linker improves accuracy in some cases, the gains are marginal—and the linker decreases performance on some data sets. One explanation for the decrease is that the linker is trained on Wikipedia, but is being tested on nonWikipedia text which has different characteristics. However, in separate experiments we found that training a linker on out-of-Wikipedia text only increased test set performance by approximately 3 percentage points. Clearly, while ranking accuracy is high overall, different strategies are needed to achieve consistently high linking performance. A few examples from the ACE data set help il6In NER we used only the top prediction, because using all candidates as in (Cucerzan, 2007) proved prohibitively inefficient. 1382 lustrate the tradeoffs between local and global features in GLOW. The global system mistakenly links “<Dorothy Byrne>, a state coordinator for the Florida Green Party, said . . . ” to the British journalist, because the journalist sense has high coherence with other mentions in the newswire text. However, the local approach correctly maps the mention to null because of a lack of local contextual clues. On the other hand, in the sentence “Instead of Los Angeles International, for example, consider flying into <Burbank> or John Wayne Airport in Orange County, Calif.”, the local ranker links the mention Burbank to Burbank, California, while the global system correctly maps the entity to Bob Hope Airport, because the three airports mentioned in the sentence are highly related to one another. Lastly, in Table 6 we compare the end system BOT F1 performance. The local approach proves a very competitive baseline which is hard to beat. Combining the global and the local approach leads to marginal improvements. The full GLOW system outperforms the existing state-of-the-art system from (Milne and Witten, 2008b), denoted as M&W, on all data sets. We also compared our system with the recent TAGME Wikification system (Ferragina and Scaiella, 2010). However, TAGME is designed for a different setting than ours: extremely short texts, like Twitter posts. The TAGME RESTful API was unable to process some of our documents at once. We attempted to input test documents one sentence at a time, disambiguating each sentence independently, which resulted in poor performance (0.07 points in F1 lower than the P(t|m) baseline). This happened mainly because the same mentions were linked to different titles in different sentences, leading to low precision. An important question is why M&W underperforms the baseline on the MSNBC and Wikipedia data sets. 
In an error analysis, M&W performed poorly on the MSNBC data not due to poor disambiguations, but instead because the data set contains only named entities, which were often delimited incorrectly by M&W. Wikipedia was challenging for a different reason: M&W performs less well on the short (one paragraph) texts in that set, because they contain relatively few of the unambiguous entities the system relies on for disambiguation. 7 Conclusions We have formalized the Disambiguation to Wikipedia (D2W) task as an optimization problem with local and global variants, and analyzed the strengths and weaknesses of each. Our experiments revealed that previous approaches for global disambiguation can be improved, but even then the local disambiguation provides a baseline which is very hard to beat. As our error analysis illustrates, the primary remaining challenge is determining when a mention does not have a corresponding Wikipedia page. Wikipedia’s hyperlinks offer a wealth of disambiguated mentions that can be leveraged to train a D2W system. However, when compared with mentions from general text, Wikipedia mentions are disproportionately likely to have corresponding Wikipedia pages. Our initial experiments suggest that accounting for this bias requires more than simply training a D2W system on a moderate number of examples from non-Wikipedia text. Applying distinct semi-supervised and active learning approaches to the task is a primary area of future work. Acknowledgments This research supported by the Army Research Laboratory (ARL) under agreement W911NF-092-0053 and by the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. The third author was supported by a Microsoft New Faculty Fellowship. Any opinions, findings, conclusions or recommendations are those of the authors and do not necessarily reflect the view of the ARL, DARPA, AFRL, or the US government. References R. Bunescu and M. Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-06), Trento, Italy, pages 9–16, April. Ming-Wei Chang, Lev Ratinov, Dan Roth, and Vivek Srikumar. 2008. Importance of semantic representation: dataless classification. In Proceedings of the 1383 23rd national conference on Artificial intelligence Volume 2, pages 830–835. AAAI Press. Rudi L. Cilibrasi and Paul M. B. Vitanyi. 2007. The google similarity distance. IEEE Trans. on Knowl. and Data Eng., 19(3):370–383. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 708–716, Prague, Czech Republic, June. Association for Computational Linguistics. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2009. Scaling wikipedia-based named entity disambiguation to arbitrary web text. In Proceedings of the WikiAI 09 - IJCAI Workshop: User Contributed Knowledge and Artificial Intelligence: An Evolving Synergy, Pasadena, CA, USA, July. Paolo Ferragina and Ugo Scaiella. 2010. Tagme: on-thefly annotation of short text fragments (by wikipedia entities). In Jimmy Huang, Nick Koudas, Gareth J. F. 
Jones, Xindong Wu, Kevyn Collins-Thompson, and Aijun An, editors, Proceedings of the 19th ACM conference on Information and knowledge management, pages 1625–1628. ACM. Tim Finin, Zareen Syed, James Mayfield, Paul McNamee, and Christine Piatko. 2009. Using Wikitology for Cross-Document Entity Coreference Resolution. In Proceedings of the AAAI Spring Symposium on Learning by Reading and Learning to Read. AAAI Press, March. Evgeniy Gabrilovich and Shaul Markovitch. 2007a. Computing semantic relatedness using wikipediabased explicit semantic analysis. In Proceedings of the 20th international joint conference on Artifical intelligence, pages 1606–1611, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Evgeniy Gabrilovich and Shaul Markovitch. 2007b. Harnessing the expertise of 70,000 human editors: Knowledge-based feature generation for text categorization. J. Mach. Learn. Res., 8:2297–2345, December. Xianpei Han and Jun Zhao. 2009. Named entity disambiguation by leveraging wikipedia semantic knowledge. In Proceeding of the 18th ACM conference on Information and knowledge management, CIKM ’09, pages 215–224, New York, NY, USA. ACM. Thorsten Joachims. 1997. A probabilistic analysis of the rocchio algorithm with tfidf for text categorization. In Proceedings of the Fourteenth International Conference on Machine Learning, ICML ’97, pages 143–151, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of wikipedia entities in web text. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD ’09, pages 457–466, New York, NY, USA. ACM. James Mayfield, David Alexander, Bonnie Dorr, Jason Eisner, Tamer Elsayed, Tim Finin, Clay Fink, Marjorie Freedman, Nikesh Garera, James Mayfield, Paul McNamee, Saif Mohammad, Douglas Oard, Christine Piatko, Asad Sayeed, Zareen Syed, and Ralph Weischede. 2009. Cross-Document Coreference Resolution: A Key Technology for Learning by Reading. In Proceedings of the AAAI 2009 Spring Symposium on Learning by Reading and Learning to Read. AAAI Press, March. Rada Mihalcea and Andras Csomai. 2007. Wikify!: linking documents to encyclopedic knowledge. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, CIKM ’07, pages 233–242, New York, NY, USA. ACM. David Milne and Ian H. Witten. 2008a. An effective, low-cost measure of semantic relatedness obtained from wikipedia links. In In the Wikipedia and AI Workshop of AAAI. David Milne and Ian H. Witten. 2008b. Learning to link with wikipedia. In Proceedings of the 17th ACM conference on Information and knowledge management, CIKM ’08, pages 509–518, New York, NY, USA. ACM. Michael Strube and Simone Paolo Ponzetto. 2006. Wikirelate! computing semantic relatedness using wikipedia. In proceedings of the 21st national conference on Artificial intelligence - Volume 2, pages 1419– 1424. AAAI Press. Yiping Zhou, Lan Nie, Omid Rouhani-Kalleh, Flavian Vasile, and Scott Gaffney. 2010. Resolving surface forms to wikipedia topics. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1335–1343, Beijing, China, August. Coling 2010 Organizing Committee. 1384
2011
138
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1385–1394, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Stacked Sub-Word Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging Weiwei Sun Department of Computational Linguistics, Saarland University German Research Center for Artificial Intelligence (DFKI) D-66123, Saarbr¨ucken, Germany [email protected] Abstract The large combined search space of joint word segmentation and Part-of-Speech (POS) tagging makes efficient decoding very hard. As a result, effective high order features representing rich contexts are inconvenient to use. In this work, we propose a novel stacked subword model for this task, concerning both efficiency and effectiveness. Our solution is a two step process. First, one word-based segmenter, one character-based segmenter and one local character classifier are trained to produce coarse segmentation and POS information. Second, the outputs of the three predictors are merged into sub-word sequences, which are further bracketed and labeled with POS tags by a fine-grained sub-word tagger. The coarse-to-fine search scheme is efficient, while in the sub-word tagging step rich contextual features can be approximately derived. Evaluation on the Penn Chinese Treebank shows that our model yields improvements over the best system reported in the literature. 1 Introduction Word segmentation and part-of-speech (POS) tagging are necessary initial steps for more advanced Chinese language processing tasks, such as parsing and semantic role labeling. Joint approaches that resolve the two tasks simultaneously have received much attention in recent research. Previous work has shown that joint solutions led to accuracy improvements over pipelined systems by avoiding segmentation error propagation and exploiting POS information to help segmentation. A challenge for joint approaches is the large combined search space, which makes efficient decoding and structured learning of parameters very hard. Moreover, the representation ability of models is limited since using rich contextual word features makes the search intractable. To overcome such efficiency and effectiveness limitations, the approximate inference and reranking techniques have been explored in previous work (Zhang and Clark, 2010; Jiang et al., 2008b). In this paper, we present an effective and efficient solution for joint Chinese word segmentation and POS tagging. Our work is motivated by several characteristics of this problem. First of all, a majority of words are easy to identify in the segmentation problem. For example, a simple maximum matching segmenter can achieve an f-score of about 90. We will show that it is possible to improve the efficiency and accuracy by using different strategies for different words. Second, segmenters designed with different views have complementary strength. We argue that the agreements and disagreements of different solvers can be used to construct an intermediate sub-word structure for joint segmentation and tagging. Since the sub-words are large enough in practice, the decoding for POS tagging over subwords is efficient. Finally, the Chinese language is characterized by the lack of morphology that often provides important clues for POS tagging, and the POS tags contain much syntactic information, which need context information within a large window for disambiguation. For example, Huang et al. 
(2007) showed the effectiveness of utilizing syntactic information to rerank POS tagging results. As a result, the capability to represent rich contextual features is crucial to a POS tagger. In this work, we use a representation-efficiency tradeoff through stacked learning, a way of approximating rich non-local fea1385 tures. This paper describes a novel stacked sub-word model. Given multiple word segmentations of one sentence, we formally define a sub-word structure that maximizes the agreement of non-word-break positions. Based on the sub-word structure, joint word segmentation and POS tagging is addressed as a two step process. In the first step, one word-based segmenter, one character-based segmenter and one local character classifier are used to produce coarse segmentation and POS information. The results of the three predictors are then merged into sub-word sequences, which are further bracketed and labeled with POS tags by a fine-grained sub-word tagger. If a string is consistently segmented as a word by the three segmenters, it will be a correct word prediction with a very high probability. In the sub-word tagging phase, the fine-grained tagger mainly considers its POS tag prediction problem. For the words that are not consistently predicted, the fine-grained tagger will also consider their bracketing problem. The coarse-to-fine scheme significantly improves the efficiency of decoding. Furthermore, in the sub-word tagging step, word features in a large window can be approximately derived from the coarse segmentation and tagging results. To train a good sub-word tagger, we use the stacked learning technique, which can effectively correct the training/test mismatch problem. We conduct our experiments on the Penn Chinese Treebank and compare our system with the stateof-the-art systems. We present encouraging results. Our system achieves an f-score of 98.17 for the word segmentation task and an f-score of 94.02 for the whole task, resulting in relative error reductions of 14.1% and 5.5% respectively over the best system reported in the literature. The remaining part of the paper is organized as follows. Section 2 gives a brief introduction to the problem and reviews the relevant previous research. Section 3 describes the details of our method. Section 4 presents experimental results and empirical analyses. Section 5 concludes the paper. 2 Background 2.1 Problem Definition Given a sequence of characters c = (c1, ..., c#c), the task of word segmentation and POS tagging is to predict a sequence of word and POS tag pairs y = (⟨w1, p1⟩, ⟨w#y, p#y⟩), where wi is a word, pi is its POS tag, and a “#” symbol denotes the number of elements in each variable. In order to avoid error propagation and make use of POS information for word segmentation, the two tasks should resolved jointly. Previous research has shown that the integrated methods outperformed pipelined systems (Ng and Low, 2004; Jiang et al., 2008a; Zhang and Clark, 2008). 2.2 Character-Based and Word-Based Methods Two kinds of approaches are popular for joint word segmentation and POS tagging. The first is the “character-based” approach, where basic processing units are characters which compose words. In this kind of approach, the task is formulated as the classification of characters into POS tags with boundary information. Both the IOB2 representation (Ramshaw and Marcus, 1995) and the Start/End representation (Kudo and Matsumoto, 2001) are popular. 
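As a concrete illustration of the IOB2 encoding, the sketch below converts a word- and POS-annotated fragment into character-level labels; the fragment is borrowed from Figure 2 later in the paper and is purely illustrative.

```python
# Character-based encoding: each character receives its word's POS tag
# plus a boundary prefix (B- for word-initial, I- otherwise).

def to_character_labels(word_pos_pairs):
    """Turn (word, POS) pairs into per-character IOB2 labels such as B-NN / I-NN."""
    labelled = []
    for word, pos in word_pos_pairs:
        for i, ch in enumerate(word):
            labelled.append((ch, ("B-" if i == 0 else "I-") + pos))
    return labelled

print(to_character_labels([("成绩", "NN"), ("领先", "JJ")]))
# [('成', 'B-NN'), ('绩', 'I-NN'), ('领', 'B-JJ'), ('先', 'I-JJ')]
```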
For example, the label B-NN indicates that a character is located at the begging of a noun. Using this method, POS information is allowed to interact with segmentation. Note that word segmentation can also be formulated as a sequential classification problem to predict whether a character is located at the beginning of, inside or at the end of a word. This character-by-character method for segmentation was first proposed in (Xue, 2003), and was then further used in POS tagging in (Ng and Low, 2004). One main disadvantage of this model is the difficulty in incorporating the whole word information. The second kind of solution is the “word-based” method, where the basic predicting units are words themselves. This kind of solver sequentially decides whether the local sequence of characters makes up a word as well as its possible POS tag. In particular, a word-based solver reads the input sentence from left to right, predicts whether the current piece of continuous characters is a word token and which class it belongs to. Solvers may use previously predicted words and their POS information as clues to find a new word. After one word is found and classified, solvers move on and search for the next possible word. This word-by-word method for segmentation was first proposed in (Zhang and Clark, 2007), 1386 and was then further used in POS tagging in (Zhang and Clark, 2008). In our previous work(Sun, 2010), we presented a theoretical and empirical comparative analysis of character-based and word-based methods for Chinese word segmentation. We showed that the two methods produced different distributions of segmentation errors in a way that could be explained by theoretical properties of the two models. A system combination method that leverages the complementary strength of word-based and character-based segmentation models was also successfully explored in their work. Different from our previous focus, the diversity of different models designed with different views is utilized to construct sub-word structures in this work. We will discuss the details in the next section. 2.3 Stacked Learning Stacked generalization is a meta-learning algorithm that was first proposed in (Wolpert, 1992) and (Breiman, 1996). The idea is to include two “levels” of predictors. The first level includes one or more predictors g1, ...gK : Rd →R; each receives input x ∈Rd and outputs a prediction gk(x). The second level consists of a single function h : Rd+K →R that takes as input ⟨x, g1(x), ..., gK(x)⟩and outputs a final prediction ˆy = h(x, g1(x), ..., gK(x)). Training is done as follows. The training data S = {(xt, yt) : t ∈[1, T]} is split into L equal-sized disjoint subsets S1, ..., SL. Then functions g1, ..., gL (where gl = ⟨gl 1, ..., gl K⟩) are seperately trained on S −Sl, and are used to construct the augmented dataset ˆS = {(⟨xt, ˆy1 t , ..., ˆyK t ⟩, yt) : ˆyk t = gl k(xt) and xt ∈Sl}. Finally, each gk is trained on the original dataset and the second level predictor h is trained on ˆS. The intent of the cross-validation scheme is that yk t is similar to the prediction produced by a predictor which is learned on a sample that does not include xt. Stacked learning has been applied as a system ensemble method in several NLP tasks, such as named entity recognition (Wu et al., 2003) and dependency parsing (Nivre and McDonald, 2008). This framework is also explored as a solution for learning nonlocal features in (Torres Martins et al., 2008). 
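In code, the data-construction step of stacked generalization looks roughly as follows; train and predict stand in for arbitrary level-0 learners, and no particular toolkit is assumed.

```python
# Sketch of the cross-validation scheme just described: level-0 predictors
# are trained on K-1 folds and used to annotate the held-out fold, so the
# level-1 learner is trained on realistic (noisy) level-0 outputs.

def build_stacked_training_data(samples, K, train, predict):
    """samples: list of (x, y) pairs; returns augmented ((x, level-0 guess), y) pairs."""
    folds = [samples[i::K] for i in range(K)]
    augmented = []
    for k, held_out in enumerate(folds):
        rest = [s for j, fold in enumerate(folds) if j != k for s in fold]
        model = train(rest)                               # level-0 model trained without fold k
        for x, y in held_out:
            augmented.append(((x, predict(model, x)), y)) # original input plus level-0 prediction
    return augmented                                      # the level-1 predictor is trained on this
```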
In the machine learning research, stacked learning has been applied to structured prediction (Cohen and Carvalho, 2005). In this work, stacked learning is used to acquire extended training data for sub-word tagging. 3 Method 3.1 Architecture In our stacked sub-word model, joint word segmentation and POS tagging is decomposed into two steps: (1) coarse-grained word segmentation and tagging, and (2) fine-grained sub-word tagging. The workflow is shown in Figure 1. In the first phase, one word-based segmenter (SegW) and one characterbased segmenter (SegC) are trained to produce word boundaries. Additionally, a local character-based joint segmentation and tagging solver (SegTagL) is used to provide word boundaries as well as inaccurate POS information. Here, the word local means the labels of nearby characters are not used as features. In other words, the local character classifier assumes that the tags of characters are independent of each other. In the second phase, our system first combines the three segmentation and tagging results to get sub-words which maximize the agreement about word boundaries. Finally, a fine-grained sub-word tagger (SubTag) is applied to bracket subwords into words and also to obtain their POS tags. Raw sentences Character-based segmenter SegC Local character classifier SegTagL Word-based Segmenter SegW Segmented sentences Segmented sentences Segmented sentences Merging Sub-word sequences Sub-word tagger SubTag Figure 1: Workflow of the stacked sub-word model. In our model, segmentation and POS tagging interact with each other in two processes. First, although SegTagL is locally trained, it resolves the 1387 two sub-tasks simultaneously. Therefore, in the subword generating stage, segmentation and POS tagging help each other. Second, in the sub-word tagging stage, the bracketing and the classification of sub-words are jointly resolved as one sequence labeling problem. Our experiments on the Penn Chinese Treebank will show that the word-based and character-based segmenters and the local tagger on their own produce high quality word boundaries. As a result, the oracle performance to recover words from a subword sequence is very high. The quality of the final tagger relies on the quality of the sub-word tagger. If a high performance sub-word tagger can be constructed, the whole task can be well resolved. The statistics will also empirically show that subwords are significantly larger than characters and only slightly smaller than words. As a result, the search space of the sub-word tagging is significantly shrunken, and exact Viterbi decoding without approximately pruning can be efficiently processed. This property makes nearly all popular sequence labeling algorithms applicable. Zhang et al. (2006) described a sub-word based tagging model to resolve word segmentation. To get the pieces which are larger than characters but smaller than words, they combine a character-based segmenter and a dictionary matching segmenter. Our contributions include (1) providing a formal definition of our sub-word structure that is based on multiple segmentations and (2) proposing a stacking method to acquire sub-words. 3.2 The Coarse-grained Solvers We systematically described the implementation of two state-of-the-art Chinese word segmenters in word-based and character-based architectures, respectively (Sun, 2010). Our word-based segmenter is based on a discriminative joint model with a first order semi-Markov structure, and the other segmenter is based on a first order Markov model. 
Exact Viterbi-style search algorithms are used for decoding. Limited to the document length, we do not give the description of the features. We refer readers to read the above paper for details. For parameter estimation, our work adopt the Passive-Aggressive (PA) framework (Crammer et al., 2006), a family of margin based online learning algorithms. In this work, we introduce two simple but important refinements: (1) to shuffle the sample orders in each iteration and (2) to average the parameters in each iteration as the final parameters. Idiom In linguistics, idioms are usually presumed to be figures of speech contradicting the principle of compositionality. As a result, it is very hard to recognize out-of-vocabulary idioms for word segmentation. However, the lexicon of idioms can be taken as a close set, which helps resolve the problem well. We collect 12992 idioms1 from several online Chinese dictionaries. For both word-based and character-based segmentation, we first match every string of a given sentence with idioms. Every sentence is then splitted into smaller pieces which are seperated by idioms. Statistical segmentation models are then performed on these smaller character sequences. We use a local classifier to predict the POS tag with positional information for each character. Each character can be assigned one of two possible boundary tags: “B” for a character that begins a word and “I” for a character that occurs in the middle of a word. We denote a candidate character token ci with a fixed window ci−2ci−1cici+1ci+2. The following features are used: • character uni-grams: ck (i −2 ≤k ≤i + 2) • character bi-grams: ckck+1 (i −2 ≤k ≤i + 1) To resolve the classification problem, we use the linear SVM classifier LIBLINEAR2. 3.3 Merging Multiple Segmentation Results into Sub-Word Sequences A majority of words are easy to identify in the segmentation problem. We favor the idea treating different words using different strategies. In this work we try to identify simple and difficult words first and to integrate them into a sub-word level. Inspired by previous work, we constructed this sub-word structure by using multiple solvers designed from different views. If a piece of continuous characters is consistently segmented by multiple segmenters, it will 1This resource is publicly available at http://www. coli.uni-saarland.de/˜wsun/idioms.txt. 2Available at http://www.csie.ntu.edu.tw/ ˜cjlin/liblinear/. 1388 以 总 成 绩 3 5 5 . 3 5 分 居 领 先 地 位 Answer: [P] [JJ] [ NN ] [ CD ] [M] [VV] [ JJ ] [ NN ] SegW: [] [] [ ] [ ] [ ] [ ] [ ] [ ] SegC: [] [] [ ] [ ] [] [ ] [ ] SegTagL: [P] [JJ] [ NN ] [ CD ] [NT] [CD] [NT] [VV] [ VV ] [ NN ] Sub-words: [P] [JJ] [ NN ] [ B-CD ] [I-CD] [NT] [CD] [NT] [VV] [ VV ] [ NN ] Figure 2: An example phrase: 以总成绩355.35分居领先地位(Being in front with a total score of 355.35 points). not be separated in the sub-word tagging step. The intuition is that strings which are consistently segmented by the different segmenters tend to be correct predictions. In our experiment on the Penn Chinese Treebank (Xue et al., 2005), the accuracy is 98.59% on the development data which is defined in the next section. The key point for the intermediate sub-word structures is to maximize the agreement of the three coarse-grained systems. In other words, the goal is to make merged sub-words as large as possible but not overlap with any predicted word produced by the three coarse-grained solvers. 
In particular, if the position between two continuous characters is predicted as a word boundary by any segmenter, this position is taken as a separation position of the sub-word sequence. This strategy makes sure that it is still possible to re-segment the strings of which the boundaries are disagreed with by the coarse-grained segmenters in the fine-grained tagging stage. The formal definition is as follows. Given a sequence of characters c = (c1, ..., c#c), let c[i : j] denote a string that is made up of characters between ci and cj (including ci and cj), then a partition of the sentence can be written as c[0 : e1], c[e1 + 1 : e2], ..., c[em : #c]. Let sk = {c[i : j]} denote the set of all segments of a partition. Given multiple partitions of a character sequence S = {sk}, there is one and only one merged partition sS = {c[i : j]} s.t. 1. ∀c[i : j] ∈sS, ∀sk ∈S, ∃c[s : e] ∈sk, s ≤ i ≤j ≤e. 2. ∀C′ satisfies the above condition, |C′| > |C|. The first condition makes sure that all segments in the merged partition can be only embedded in but do not overlap with any segment of any partition from S. The second condition promises that segments of the merged partition achieve maximum length. Figure 2 is an example to illustrate the procedure of our method. The lines SegW, SegC and SegTagL are the predictions of the three coarsegrained solvers. For the three words at the beginning and the two words at the end, the three predictors agree with each other. And these five words are kept as sub-words. For the character sequence “3 55.35分居”, the predictions are very different. Because there are no word break predictions among the first three characters “355”, it is as a whole taken as one sub-word. For the other five characters, either the left position or the right position is segmented as a word break by some predictor, so the merging processor seperates them and takes each one as a single sub-word. The last line shows the merged sub-word sequence. The coarsegrained POS tags with positional information are derived from the labels provided by SegTagL. 3.4 The Fine-grained Sub-Word Tagger Bracketing sub-words into words is formulated as a IOB-style sequential classification problem. Each sub-word may be assigned with one POS tag as well as two possible boundary tags: “B” for the beginning position and “I” for the middle position. A tagger is trained to classify sub-word by using the features derived from its contexts. The sub-word level allows our system to utilize features in a large context, which is very important for POS tagging of the morphologically poor language. Features are formed making use of sub-word contents, their IOB-style inaccurate POS tags. In the following description, “C” refers to the content of the sub-word, while “T” refers to the IOB-style POS tags. For convenience, we denote a sub-word with its context ...si−2si−1sisi+1si+2..., where si is 1389 C(si−1)=“成绩”; T(si−1)=“NN” C(si)=“355”; T(si)=“B-CD” C(si+1)=“.”; T(si+1)=“I-CD” C(si−1)C(si)=“成绩355” T(si−1)T(si)=“NN B-CD” C(si)C(si+1)=“355.” T(si)T(si+1)=“B-CD I-CD” C(si−1)C(si+1)=“成绩.” T(si−1)T(si+1)=“B-NN I-CD” Prefix(1)=“3”; Prefix(2)=“35”; Prefix(3)=“355” Suffix(1)=“5”; Suffix(2)=“55”; Suffix(3)=“355” Table 1: An example of features used in the sub-word tagging. the current token. We denote lC, lT as the sizes of the window. 
• Uni-gram features: C(sk) (−lC ≤k ≤lC), T(sk) (−lT ≤k ≤lT ) • Bi-gram features: C(sk)C(sk+1) (−lC ≤k ≤ lC −1), T(sk)T(sk+1) (−lT ≤k ≤lT −1) • C(si−1)C(si+1) (if lC ≥1), T(si−1)T(si+1) (if lT ≥1) • T(si−2)T(si+1) (if lT ≥2) • In order to better handle unknown words, we also extract morphological features: character n-gram prefixes and suffixes for n up to 3. These features have been shown useful in previous research (Huang et al., 2007). Take the sub-word “355” in Figure 2 for example, when lC and lT are both set to 1, all features used are listed in Table 1. In the following experiments, we will vary window sizes lC and lT to find out the contribution of context information for the disambiguation. A first order Max-Margin Markov Networks model is used to resolve the sequence tagging problem. We use the SVM-HMM3 implementation for the experiments in this work. We use the basic linear model without applying any kernel function. 3Available at http://www.cs.cornell.edu/ People/tj/svm_light/svm_hmm.html. Algorithm 1: The stacked learning procedure for the sub-word tagger. input : Data S = {(ct, yt), t = 1, 2, ..., n} Split S into L partitions {S1, ...SL} for l = 1, ..., L do Train SegWl, SegCl and SegTagLl using S −Sl. Predict Sl using SegWl, SegCl and SegTagLl. Merge the predictions to get sub-words training sample S′ l. end Train the sub-word tagger SubTag using S′. 3.5 Stacked Learning for the Sub-Word Tagger The three coarse-grained solvers SegW, SegC and SegTagL are directly trained on the original training data. When these three predictors are used to produce the training data, the performance is perfect. However, this does not hold when these models are applied to the test data. If we directly apply SegW, SegC and SegTagL to extend the training data to generate sub-word samples, the extended training data for the sub-word tagger will be very different from the data in the run time, resulting in poor performance. One way to correct the training/test mismatch is to use the stacking method, where a K-fold crossvalidation on the original data is performed to construct the training data for sub-word tagging. Algorithm 1 illustrates the learning procedure. First, the training data S = {(ct, yt)} is split into L equalsized disjoint subsets S1, ..., SL. For each subset Sl, the complementary set S −Sl is used to train three coarse solvers SegWl, SegCl and SegTagLl, which process the Sl and provide inaccurate predictions. Then the inaccurate predictions are merged into subword sequences and Sl is extended to S′ l. Finally, the sub-word tagger is trained on the whole extended data set S′. 4 Experiments 4.1 Setting Previous studies on joint Chinese word segmentation and POS tagging have used the Penn Chinese Treebank (CTB) in experiments. We follow this set1390 ting in this paper. We use CTB 5.0 as our main corpus and define the training, development and test sets according to (Jiang et al., 2008a; Jiang et al., 2008b; Kruengkrai et al., 2009; Zhang and Clark, 2010). Table 2 shows the statistics of our experimental settings. Data set CTB files # of sent. # of words Training 1-270 18,089 493,939 400-931 1001-1151 Devel. 301-325 350 6821 Test 271-300 348 8008 Table 2: Training, development and test data on CTB 5.0 Three metrics are used for evaluation: precision (P), recall (R) and balanced f-score (F) defined by 2PR/(P+R). Precision is the relative amount of correct words in the system output. Recall is the relative amount of correct words compared to the gold standard annotations. 
For segmentation, a token is considered to be correct if its boundaries match the boundaries of a word in the gold standard. For the whole task, both the boundaries and the POS tag have to be correctly identified. 4.2 Performance of the Coarse-grained Solvers Table 3 shows the performance on the development data set of the three coarse-grained solvers. In this paper, we use 20 iterations to train SegW and SegC for all experiments. Even only locally trained, the character classifier SegTagL still significantly outperforms the two state-of-the-art segmenters SegW and SegC. This good performance indicates that the POS information is very important for word segmentation. Devel. Task P(%) R(%) F SegW Seg 94.55 94.84 94.69 SegC Seg 95.10 94.38 94.73 SegTagL Seg 95.67 95.98 95.83 Seg&Tag 87.54 91.29 89.38 Table 3: Performance of the coarse-grained solvers on the development data. 4.3 Statistics of Sub-Words Since the base predictors to generate coarse information are two word segmenters and a local character classifier, the coarse decoding is efficient. If the length of sub-words is too short, i.e. the decoding path for sub-word sequences are too long, the decoding of the fine-grained stage is still hard. Although we cannot give a theoretical average length of subwords, we can still show the empirical one. The average length of sub-words on the development set is 1.64, while the average length of words is 1.69. The number of all IOB-style POS tags is 59 (when using 5-fold cross-validation to generate stacked training samples). The number of all POS tags is 35. Empirically, the decoding over sub-words is 1.69 1.64×(59 35)n+1 times as slow as the decoding over words, where n is the order of the markov model. When a first order markov model is used, this number is 2.93. These statistics empirically suggest that the decoding over sub-word sequence can be efficient. On the other hand, the sub-word sequences are not perfect in the sense that they do not promise to recover all words because of the errors made in the first step. Similarly, we can only show the empirical upper bound of the sub-word tagging. The oracle performance of the final POS tagging on the development data set is shown in Table 4. The upper bound indicates that the coarse search procedure does not lose too much. Task P(%) R(%) F Seg&Tag 99.50% 99.09% 99.29 Table 4: Upper bound of the sub-word tagging on the development data. One main disadvantage of character-based approach is the difficulty to incorporate word features. Since the sub-words are on average close to words, sub-word features are good approximations of word features. 4.4 Rich Contextual Features Are Useful Table 5 shows the effect that features within different window size has on the sub-word tagging task. In this table, the symbol “C” means sub-word content features while the symbol “T” means IOB-style POS tag features. The number indicates the length 1391 Devel. P(%) R(%) F C:±0 T:±0 92.52 92.83 92.67 C:±1 T:±0 92.63 93.27 92.95 C:±1 T:±1 92.62 93.05 92.83 C:±2 T:±0 93.17 93.86 93.51 C:±2 T:±1 93.27 93.64 93.45 C:±2 T:±2 93.08 93.61 93.34 C:±3 T:±0 93.12 93.86 93.49 C:±3 T:±1 93.34 93.96 93.65 C:±3 T:±2 93.34 93.96 93.65 Table 5: Performance of the stacked sub-word model (K = 5) with features in different window sizes. of the window. For example, “C:±1” means that the tagger uses one preceding sub-word and one succeeding sub-word as features. From this table, we can clearly see the impact of features derived from neighboring sub-words. 
There is a significant increase between “C:±2” and “C:±1” models. This confirms our motivation that longer history and future features are crucial to the Chinese POS tagging problem. It is the main advantage of our model that making rich contextual features applicable. In all previous solutions, only features within a short history can be used due to the efficiency limitation. The performance is further slightly improved when the window size is increased to 3. Using the labeled bracketing f-score, the evaluation shows that the “C:±3 T:±1” model performs the same as the “C:±3 T:±2” model. However, the sub-word classification accuracy of the “C:±3 T:±1” model is higher, so in the following experiments and the final results reported on the test data set, we choose this setting. This table also suggests that the IOB-style POS information of sub-words does not contribute. We think there are two main reasons: (1) The POS information provided by the local classifier is inaccurate; (2) The structured learning of the sub-word tagger can use real predicted sub-word labels during its decoding time, since this learning algorithm does inference during the training time. It is still an open question whether more accurate POS information in rich contexts can help this task. If the answer is YES, how can we efficiently incorporate these features? 4.5 Stacked Learning Is Useful Table 6 compares the performance of “C:±3 T:±1” models trained with no stacking as well as different folds of cross-validation. We can see that although it is still possible to improve the segmentation and POS tagging performance compared to the local character classifier, the whole task just benefits only a little from the sub-word tagging procedure if the stacking technique is not applied. The stacking technique can significantly improve the system performance, both for segmentation and POS tagging. This experiment confirms the theoretical motivation of using stacked learning: simulating the test-time setting when a sub-word tagger is applied to a new instance. There is not much difference between the 5-fold and the 10-fold cross-validation. Devel. Task P(%) R(%) F No stacking Seg 95.75 96.48 96.12 Seg&Tag 91.42 92.13 91.77 K = 5 Seg 96.42 97.04 96.73 Seg&Tag 93.34 93.96 93.65 K = 10 Seg 96.67 97.11 96.89 Seg&Tag 93.50 94.06 93.78 Table 6: Performance on the development data. No stacking and different folds of cross-validation are separately applied. 4.6 Final Results Table 7 summarizes the performance of our final system on the test data and other systems reported in a majority of previous work. The final results of our system are achieved by using 10-fold crossvalidation “C:±3 T:±1” models. The left most column indicates the reference of previous systems that represent state-of-the-art results. The comparison of the accuracy between our stacked sub-word system and the state-of-the-art systems in the literature indicates that our method is competitive with the best systems. Our system obtains the highest f-score performance on both segmentation and the whole task, resulting in error reductions of 14.1% and 5.5% respectively. 1392 Test Seg Seg&Tag (Jiang et al., 2008a) 97.85 93.41 (Jiang et al., 2008b) 97.74 93.37 (Kruengkrai et al., 2009) 97.87 93.67 (Zhang and Clark, 2010) 97.78 93.67 Our system 98.17 94.02 Table 7: F-score performance on the test data. 5 Conclusion and Future Work This paper has described a stacked sub-word model for joint Chinese word segmentation and POS tagging. 
We defined a sub-word structure which maximizes the agreement of multiple segmentations provided by different segmenters. We showed that this sub-word structure could explore the complementary strength of different systems designed with different views. Moreover, the POS tagging could be efficiently and effectively resolved over sub-word sequences. To train a good sub-word tagger, we introduced a stacked learning procedure. Experiments showed that our approach was superior to the existing approaches reported in the literature. Machine learning and statistical approaches encounter difficulties when the input/output data have a structured and relational form. Research in empirical Natural Language Processing has been tackling these complexities since the early work in the field. Recent work in machine learning has provided several paradigms to globally represent and process such data: linear models for structured prediction, graphical models, constrained conditional models, and reranking, among others. A general expressivity-efficiency trade off is observed. Although the stacked sub-word model is an ad hoc solution for a particular problem, namely joint word segmentation and POS tagging, the idea to employ system ensemble and stacked learning in general provides an alternative for structured problems. Multiple “cheap” coarse systems are used to provide diverse outputs, which may be inaccurate. These outputs are further merged into an intermediate representation, which allows an extractive system to use rich contexts to predict the final results. A natural avenue for future work is the extension of our method to other NLP tasks. Acknowledgments The work is supported by the project TAKE (Technologies for Advanced Knowledge Extraction), funded under contract 01IW08003 by the German Federal Ministry of Education and Research. The author is also funded by German Academic Exchange Service (DAAD). The author would would like to thank Dr. Jia Xu for her helpful discussion, and Regine Bader for proofreading this paper. References Leo Breiman. 1996. Stacked regressions. Mach. Learn., 24:49–64, July. William W. Cohen and Vitor R. Carvalho. 2005. Stacked sequential learning. In Proceedings of the 19th international joint conference on Artificial intelligence, pages 671–676, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai ShalevShwartz, and Yoram Singer. 2006. Online passiveaggressive algorithms. JOURNAL OF MACHINE LEARNING RESEARCH, 7:551–585. Zhongqiang Huang, Mary Harper, and Wen Wang. 2007. Mandarin part-of-speech tagging and discriminative reranking. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1093–1102, Prague, Czech Republic, June. Association for Computational Linguistics. Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan L¨u. 2008a. A cascaded linear model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of ACL-08: HLT, pages 897–904, Columbus, Ohio, June. Association for Computational Linguistics. Wenbin Jiang, Haitao Mi, and Qun Liu. 2008b. Word lattice reranking for Chinese word segmentation and part-of-speech tagging. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 385–392, Manchester, UK, August. Coling 2008 Organizing Committee. Canasai Kruengkrai, Kiyotaka Uchimoto, Jun’ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. 
An error-driven word-character hybrid model for joint Chinese word segmentation and pos tagging. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Process1393 ing of the AFNLP, pages 513–521, Suntec, Singapore, August. Association for Computational Linguistics. Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In NAACL ’01: Second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies 2001, pages 1–8, Morristown, NJ, USA. Association for Computational Linguistics. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-ofspeech tagging: One-at-a-time or all-at-once? wordbased or character-based? In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 277– 284, Barcelona, Spain, July. Association for Computational Linguistics. Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL-08: HLT, pages 950–958, Columbus, Ohio, June. Association for Computational Linguistics. L. A. Ramshaw and M. P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the 3rd ACL/SIGDAT Workshop on Very Large Corpora, Cambridge, Massachusetts, USA, pages 82–94. Weiwei Sun. 2010. Word-based and character-based word segmentation models: Comparison and combination. In Coling 2010: Posters, pages 1211–1219, Beijing, China, August. Coling 2010 Organizing Committee. Andr´e Filipe Torres Martins, Dipanjan Das, Noah A. Smith, and Eric P. Xing. 2008. Stacking dependency parsers. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 157–166, Honolulu, Hawaii, October. Association for Computational Linguistics. David H. Wolpert. 1992. Original contribution: Stacked generalization. Neural Netw., 5:241–259, February. Dekai Wu, Grace Ngai, and Marine Carpuat. 2003. A stacked, voted, stacked model for named entity recognition. In Walter Daelemans and Miles Osborne, editors, Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 200–203. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The penn chinese treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. Nianwen Xue. 2003. Chinese word segmentation as character tagging. In International Journal of Computational Linguistics and Chinese Language Processing. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 840–847, Prague, Czech Republic, June. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2008. Joint word segmentation and POS tagging using a single perceptron. In Proceedings of ACL-08: HLT, pages 888–896, Columbus, Ohio, June. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2010. A fast decoder for joint word segmentation and POS-tagging using a single discriminative model. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 843–852, Cambridge, MA, October. Association for Computational Linguistics. Ruiqiang Zhang, Genichiro Kikui, and Eiichiro Sumita. 2006. Subword-based tagging by conditional random fields for Chinese word segmentation. 
In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 193–196, New York City, USA, June. Association for Computational Linguistics.
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 132–141, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Using Multiple Sources to Construct a Sentiment Sensitive Thesaurus for Cross-Domain Sentiment Classification Danushka Bollegala The University of Tokyo 7-3-1, Hongo, Tokyo, 113-8656, Japan danushka@ iba.t.u-tokyo.ac.jp David Weir School of Informatics University of Sussex Falmer, Brighton, BN1 9QJ, UK d.j.weir@ sussex.ac.uk John Carroll School of Informatics University of Sussex Falmer, Brighton, BN1 9QJ, UK j.a.carroll@ sussex.ac.uk Abstract We describe a sentiment classification method that is applicable when we do not have any labeled data for a target domain but have some labeled data for multiple other domains, designated as the source domains. We automatically create a sentiment sensitive thesaurus using both labeled and unlabeled data from multiple source domains to find the association between words that express similar sentiments in different domains. The created thesaurus is then used to expand feature vectors to train a binary classifier. Unlike previous cross-domain sentiment classification methods, our method can efficiently learn from multiple source domains. Our method significantly outperforms numerous baselines and returns results that are better than or comparable to previous cross-domain sentiment classification methods on a benchmark dataset containing Amazon user reviews for different types of products. 1 Introduction Users express opinions about products or services they consume in blog posts, shopping sites, or review sites. It is useful for both consumers as well as for producers to know what general public think about a particular product or service. Automatic document level sentiment classification (Pang et al., 2002; Turney, 2002) is the task of classifying a given review with respect to the sentiment expressed by the author of the review. For example, a sentiment classifier might classify a user review about a movie as positive or negative depending on the sentiment expressed in the review. Sentiment classification has been applied in numerous tasks such as opinion mining (Pang and Lee, 2008), opinion summarization (Lu et al., 2009), contextual advertising (Fan and Chang, 2010), and market analysis (Hu and Liu, 2004). Supervised learning algorithms that require labeled data have been successfully used to build sentiment classifiers for a specific domain (Pang et al., 2002). However, sentiment is expressed differently in different domains, and it is costly to annotate data for each new domain in which we would like to apply a sentiment classifier. For example, in the domain of reviews about electronics products, the words “durable” and “light” are used to express positive sentiment, whereas “expensive” and “short battery life” often indicate negative sentiment. On the other hand, if we consider the books domain the words “exciting” and “thriller” express positive sentiment, whereas the words “boring” and “lengthy” usually express negative sentiment. A classifier trained on one domain might not perform well on a different domain because it would fail to learn the sentiment of the unseen words. Work in cross-domain sentiment classification (Blitzer et al., 2007) focuses on the challenge of training a classifier from one or more domains (source domains) and applying the trained classifier in a different domain (target domain). 
A crossdomain sentiment classification system must overcome two main challenges. First, it must identify which source domain features are related to which target domain features. Second, it requires a learning framework to incorporate the information re132 garding the relatedness of source and target domain features. Following previous work, we define crossdomain sentiment classification as the problem of learning a binary classifier (i.e. positive or negative sentiment) given a small set of labeled data for the source domain, and unlabeled data for both source and target domains. In particular, no labeled data is provided for the target domain. In this paper, we describe a cross-domain sentiment classification method using an automatically created sentiment sensitive thesaurus. We use labeled data from multiple source domains and unlabeled data from source and target domains to represent the distribution of features. We represent a lexical element (i.e. a unigram or a bigram of word lemma) in a review using a feature vector. Next, for each lexical element we measure its relatedness to other lexical elements and group related lexical elements to create a thesaurus. The thesaurus captures the relatedness among lexical elements that appear in source and target domains based on the contexts in which the lexical elements appear (their distributional context). A distinctive aspect of our approach is that, in addition to the usual co-occurrence features typically used in characterizing a word’s distributional context, we make use, where possible, of the sentiment label of a document: i.e. sentiment labels form part of our context features. This is what makes the distributional thesaurus sensitive to sentiment. Unlabeled data is cheaper to collect compared to labeled data and is often available in large quantities. The use of unlabeled data enables us to accurately estimate the distribution of words in source and target domains. Our method can learn from a large amount of unlabeled data to leverage a robust cross-domain sentiment classifier. We model the cross-domain sentiment classification problem as one of feature expansion, where we append additional related features to feature vectors that represent source and target domain reviews in order to reduce the mismatch of features between the two domains. Methods that use related features have been successfully used in numerous tasks such as query expansion (Fang, 2008), and document classification (Shen et al., 2009). However, feature expansion techniques have not previously been applied to the task of cross-domain sentiment classification. In our method, we use the automatically created thesaurus to expand feature vectors in a binary classifier at train and test times by introducing related lexical elements from the thesaurus. We use L1 regularized logistic regression as the classification algorithm. (However, the method is agnostic to the properties of the classifier and can be used to expand feature vectors for any binary classifier). L1 regularization enables us to select a small subset of features for the classifier. Unlike previous work which attempts to learn a cross-domain classifier using a single source domain, we leverage data from multiple source domains to learn a robust classifier that generalizes across multiple domains. Our contributions can be summarized as follows. • We describe a fully automatic method to create a thesaurus that is sensitive to the sentiment of words expressed in different domains. 
• We describe a method to use the created thesaurus to expand feature vectors at train and test times in a binary classifier. 2 A Motivating Example To explain the problem of cross-domain sentiment classification, consider the reviews shown in Table 1 for the domains books and kitchen appliances. Table 1 shows two positive and one negative review from each domain. We have emphasized in boldface the words that express the sentiment of the authors of the reviews. We see that the words excellent, broad, high quality, interesting, and well researched are used to express positive sentiment in the books domain, whereas the word disappointed indicates negative sentiment. On the other hand, in the kitchen appliances domain the words thrilled, high quality, professional, energy saving, lean, and delicious express positive sentiment, whereas the words rust and disappointed express negative sentiment. Although high quality would express positive sentiment in both domains, and disappointed negative sentiment, it is unlikely that we would encounter well researched in kitchen appliances reviews, or rust or delicious in book reviews. Therefore, a model that is trained only using book reviews might not have any weights learnt for delicious or rust, which would make it difficult for this model to accurately classify reviews of kitchen appliances. 133 books kitchen appliances + Excellent and broad survey of the development of civilization with all the punch of high quality fiction. I was so thrilled when I unpack my processor. It is so high quality and professional in both looks and performance. + This is an interesting and well researched book. Energy saving grill. My husband loves the burgers that I make from this grill. They are lean and delicious. Whenever a new book by Philippa Gregory comes out, I buy it hoping to have the same experience, and lately have been sorely disappointed. These knives are already showing spots of rust despite washing by hand and drying. Very disappointed. Table 1: Positive (+) and negative (-) sentiment reviews in two different domains. sentence Excellent and broad survey of the development of civilization. POS tags Excellent/JJ and/CC broad/JJ survey/NN1 of/IO the/AT development/NN1 of/IO civilization/NN1 lexical elements (unigrams) excellent, broad, survey, development, civilization lexical elements (bigrams) excellent+broad, broad+survey, survey+development, development+civilization sentiment features (lemma) excellent*P, broad*P, survey*P, excellent+broad*P, broad+survey*P sentiment features (POS) JJ*P, NN1*P, JJ+NN1*P Table 2: Generating lexical elements and sentiment features from a positive review sentence. 3 Sentiment Sensitive Thesaurus One solution to the feature mismatch problem outlined above is to use a thesaurus that groups different words that express the same sentiment. For example, if we know that both excellent and delicious are positive sentiment words, then we can use this knowledge to expand a feature vector that contains the word delicious using the word excellent, thereby reducing the mismatch between features in a test instance and a trained model. Below we describe a method to construct a sentiment sensitive thesaurus for feature expansion. Given a labeled or an unlabeled review, we first split the review into individual sentences. We carry out part-of-speech (POS) tagging and lemmatization on each review sentence using the RASP system (Briscoe et al., 2006). 
Lemmatization reduces the data sparseness and has been shown to be effective in text classification tasks (Joachims, 1998). We then apply a simple word filter based on POS tags to select content words (nouns, verbs, adjectives, and adverbs). In particular, previous work has identified adjectives as good indicators of sentiment (Hatzivassiloglou and McKeown, 1997; Wiebe, 2000). Following previous work in cross-domain sentiment classification, we model a review as a bag of words. We select unigrams and bigrams from each sentence. For the remainder of this paper, we will refer to unigrams and bigrams collectively as lexical elements. Previous work on sentiment classification has shown that both unigrams and bigrams are useful for training a sentiment classifier (Blitzer et al., 2007). We note that it is possible to create lexical elements both from source domain labeled reviews as well as from unlabeled reviews in source and target domains. Next, we represent each lexical element u using a set of features as follows. First, we select other lexical elements that co-occur with u in a review sentence as features. Second, from each source domain labeled review sentence in which u occurs, we create sentiment features by appending the label of the review to each lexical element we generate from that review. For example, consider the sentence selected from a positive review of a book shown in Table 2. In Table 2, we use the notation “*P” to indicate positive sentiment features and “*N” to indicate negative sentiment features. The example sentence shown in Table 2 is selected from a positively labeled review, and generates positive sentiment features as shown in Table 2. In addition to word-level sentiment features, we replace words with their POS tags to create 134 POS-level sentiment features. POS tags generalize the word-level sentiment features, thereby reducing feature sparseness. Let us denote the value of a feature w in the feature vector u representing a lexical element u by f(u, w). The vector u can be seen as a compact representation of the distribution of a lexical element u over the set of features that co-occur with u in the reviews. From the construction of the feature vector u described in the previous paragraph, it follows that w can be either a sentiment feature or another lexical element that co-occurs with u in some review sentence. The distributional hypothesis (Harris, 1954) states that words that have similar distributions are semantically similar. We compute f(u, w) as the pointwise mutual information between a lexical element u and a feature w as follows: f(u, w) = log c(u,w) N Pn i=1 c(i,w) N × Pm j=1 c(u,j) N ! (1) Here, c(u, w) denotes the number of review sentences in which a lexical element u and a feature w co-occur, n and m respectively denote the total number of lexical elements and the total number of features, and N = Pn i=1 Pm j=1 c(i, j). Pointwise mutual information is known to be biased towards infrequent elements and features. We follow the discounting approach of Pantel & Ravichandran (2004) to overcome this bias. Next, for two lexical elements u and v (represented by feature vectors u and v, respectively), we compute the relatedness τ(v, u) of the feature v to the feature u as follows, τ(v, u) = P w∈{x|f(v,x)>0} f(u, w) P w∈{x|f(u,x)>0} f(u, w). (2) Here, we use the set notation {x|f(v, x) > 0} to denote the set of features that co-occur with v. 
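A small sketch of Equations 1 and 2, assuming the co-occurrence counts are stored as a dictionary of dictionaries; the discounting of Pantel and Ravichandran (2004) is omitted and only positive PMI values are retained, so this approximates rather than reproduces the paper's procedure. The toy counts in the demo are invented purely for illustration.

```python
import math
from collections import defaultdict


def pmi_vectors(cooc):
    """PMI feature vectors from raw co-occurrence counts (Equation 1).

    cooc[u][w] is the number of review sentences in which lexical
    element u and feature w co-occur.  Only positive PMI values are
    kept, giving a sparse representation.
    """
    N = float(sum(c for row in cooc.values() for c in row.values()))
    elem_tot = {u: sum(row.values()) for u, row in cooc.items()}
    feat_tot = defaultdict(float)
    for row in cooc.values():
        for w, c in row.items():
            feat_tot[w] += c

    vec = {}
    for u, row in cooc.items():
        vec[u] = {}
        for w, c in row.items():
            val = math.log((c / N) / ((feat_tot[w] / N) * (elem_tot[u] / N)))
            if val > 0:
                vec[u][w] = val
    return vec


def relatedness(vec_v, vec_u):
    """tau(v, u) from Equation 2: the fraction of u's feature weight mass
    that falls on features also co-occurring with v."""
    denom = sum(vec_u.values())
    if denom == 0:
        return 0.0
    return sum(f for w, f in vec_u.items() if w in vec_v) / denom


# Toy counts; feature names mix co-occurring lexical elements and
# sentiment features such as "book*P" (positive) for illustration.
cooc = {
    "excellent": {"book*P": 4, "read": 3, "product": 2},
    "delicious": {"grill*P": 3, "product": 2, "taste": 4},
    "boring":    {"book*N": 5, "read": 2},
}
vec = pmi_vectors(cooc)
print(relatedness(vec["delicious"], vec["excellent"]))  # tau(delicious, excellent)
```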
Relatedness of a lexical element u to another lexical element v is the fraction of feature weights in the feature vector for the element u that also co-occur with the features in the feature vector for the element v. If there are no features that co-occur with both u and v, then the relatedness reaches its minimum value of 0. On the other hand if all features that co-occur with u also co-occur with v, then the relatedness , τ(v, u), reaches its maximum value of 1. Note that relatedness is an asymmetric measure by the definition given in Equation 2, and the relatedness τ(v, u) of an element v to another element u is not necessarily equal to τ(u, v), the relatedness of u to v. We use the relatedness measure defined in Equation 2 to construct a sentiment sensitive thesaurus in which, for each lexical element u we list lexical elements v that co-occur with u (i.e. f(u, v) > 0) in descending order of relatedness values τ(v, u). In the remainder of the paper, we use the term base entry to refer to a lexical element u for which its related lexical elements v (referred to as the neighbors of u) are listed in the thesaurus. Note that relatedness values computed according to Equation 2 are sensitive to sentiment labels assigned to reviews in the source domain, because co-occurrences are computed over both lexical and sentiment elements extracted from reviews. In other words, the relatedness of an element u to another element v depends upon the sentiment labels assigned to the reviews that generate u and v. This is an important fact that differentiates our sentiment-sensitive thesaurus from other distributional thesauri which do not consider sentiment information. Moreover, we only need to retain lexical elements in the sentiment sensitive thesaurus because when predicting the sentiment label for target reviews (at test time) we cannot generate sentiment elements from those (unlabeled) reviews, therefore we are not required to find expansion candidates for sentiment elements. However, we emphasize the fact that the relatedness values between the lexical elements listed in the sentiment-sensitive thesaurus are computed using co-occurrences with both lexical and sentiment features, and therefore the expansion candidates selected for the lexical elements in the target domain reviews are sensitive to sentiment labels assigned to reviews in the source domain. Using a sparse matrix format and approximate similarity matching techniques (Sarawagi and Kirpal, 2004), we can efficiently create a thesaurus from a large set of reviews. 4 Feature Expansion Our feature expansion phase augments a feature vector with additional related features selected from the 135 sentiment-sensitive thesaurus created in Section 3 to overcome the feature mismatch problem. First, following the bag-of-words model, we model a review d using the set {w1, . . . , wN}, where the elements wi are either unigrams or bigrams that appear in the review d. We then represent a review d by a realvalued term-frequency vector d ∈RN, where the value of the j-th element dj is set to the total number of occurrences of the unigram or bigram wj in the review d. 
To find the suitable candidates to expand a vector d for the review d, we define a ranking score score(ui, d) for each base entry in the thesaurus as follows: score(ui, d) = PN j=1 djτ(wj, ui) PN l=1 dl (3) According to this definition, given a review d, a base entry ui will have a high ranking score if there are many words wj in the review d that are also listed as neighbors for the base entry ui in the sentimentsensitive thesaurus. Moreover, we weight the relatedness scores for each word wj by its normalized term-frequency to emphasize the salient unigrams and bigrams in a review. Recall that relatedness is defined as an asymmetric measure in Equation 2, and we use τ(wj, ui) in the computation of score(ui, d) in Equation 3. This is particularly important because we would like to score base entries ui considering all the unigrams and bigrams that appear in a review d, instead of considering each unigram or bigram individually. To expand a vector, d, for a review d, we first rank the base entries, ui using the ranking score in Equation 3 and select the top k ranked base entries. Let us denote the r-th ranked (1 ≤r ≤k) base entry for a review d by vr d. We then extend the original set of unigrams and bigrams {w1, . . . , wN} by the base entries v1 d, . . . , vk d to create a new vector d′ ∈R(N+k) with dimensions corresponding to w1, . . . , wN, v1 d, . . . , vk d for a review d. The values of the extended vector d′ are set as follows. The values of the first N dimensions that correspond to unigrams and bigrams wi that occur in the review d are set to di, their frequency in d. The subsequent k dimensions that correspond to the top ranked based entries for the review d are weighted according to their ranking score. Specifically, we set the value of the r-th ranked base entry vr d to 1/r. Alternatively, one could use the ranking score, score(vr d, d), itself as the value of the appended base entries. However, both relatedness scores as well as normalized termfrequencies can be small in practice, which leads to very small absolute ranking scores. By using the inverse rank, we only take into account the relative ranking of base entries and ignore their absolute scores. Note that the score of a base entry depends on a review d. Therefore, we select different base entries as additional features for expanding different reviews. Furthermore, we do not expand each wi individually when expanding a vector d for a review. Instead, we consider all unigrams and bigrams in d when selecting the base entries for expansion. One can think of the feature expansion process as a lower dimensional latent mapping of features onto the space spanned by the base entries in the sentiment-sensitive thesaurus. The asymmetric property of the relatedness (Equation 2) implicitly prefers common words that co-occur with numerous other words as expansion candidates. Such words act as domain independent pivots and enable us to transfer the information regarding sentiment from one domain to another. Using the extended vectors d′ to represent reviews, we train a binary classifier from the source domain labeled reviews to predict positive and negative sentiment in reviews. We differentiate the appended base entries vr d from wi that existed in the original vector d (prior to expansion) by assigning different feature identifiers to the appended base entries. For example, a unigram excellent in a feature vector is differentiated from the base entry excellent by assigning the feature id, “BASE=excellent” to the latter. 
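The ranking and expansion steps can be sketched as follows, assuming the thesaurus is stored as a mapping from each base entry u to its neighbors v with their relatedness values τ(v, u); the function and argument names are illustrative, not from the paper. The value of k (the number of appended base entries) is tuned on validation data in Section 5.2.

```python
def expand_review(term_freqs, thesaurus, k):
    """Expand a review's term-frequency vector (Equation 3 plus the
    inverse-rank weighting described above).

    term_freqs -- dict mapping unigrams/bigrams w_j to counts d_j
    thesaurus  -- dict: base entry u -> {neighbor v: tau(v, u)}
    k          -- number of top-ranked base entries to append
    """
    total = float(sum(term_freqs.values()))
    scores = {}
    for u, neighbors in thesaurus.items():
        s = sum(d * neighbors.get(w, 0.0) for w, d in term_freqs.items())
        if s > 0:
            scores[u] = s / total   # score(u, d) of Equation 3

    expanded = dict(term_freqs)
    ranked = sorted(scores, key=scores.get, reverse=True)[:k]
    for r, u in enumerate(ranked, start=1):
        # Distinct feature id so the classifier can weight base entries
        # separately from the original unigrams and bigrams.
        expanded["BASE=" + u] = 1.0 / r
    return expanded
```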
This enables us to learn different weights for base entries depending on whether they are useful for expanding a feature vector. We use L1 regularized logistic regression as the classification algorithm (Ng, 2004), which produces a sparse model in which most irrelevant features are assigned a zero weight. This enables us to select useful features for classification in a systematic way without having to preselect features using heuristic approaches. The regularization parameter is set to its default value of 1 for all the experiments described in this paper. 136 5 Experiments 5.1 Dataset To evaluate our method we use the cross-domain sentiment classification dataset prepared by Blitzer et al. (2007). This dataset consists of Amazon product reviews for four different product types: books (B), DVDs (D), electronics (E) and kitchen appliances (K). There are 1000 positive and 1000 negative labeled reviews for each domain. Moreover, the dataset contains some unlabeled reviews (on average 17, 547) for each domain. This benchmark dataset has been used in much previous work on cross-domain sentiment classification and by evaluating on it we can directly compare our method against existing approaches. Following previous work, we randomly select 800 positive and 800 negative labeled reviews from each domain as training instances (i.e. 1600×4 = 6400); the remainder is used for testing (i.e. 400 × 4 = 1600). In our experiments, we select each domain in turn as the target domain, with one or more other domains as sources. Note that when we combine more than one source domain we limit the total number of source domain labeled reviews to 1600, balanced between the domains. For example, if we combine two source domains, then we select 400 positive and 400 negative labeled reviews from each domain giving (400 + 400) × 2 = 1600. This enables us to perform a fair evaluation when combining multiple source domains. The evaluation metric is classification accuracy on a target domain, computed as the percentage of correctly classified target domain reviews out of the total number of reviews in the target domain. 5.2 Effect of Feature Expansion To study the effect of feature expansion at train time compared to test time, we used Amazon reviews for two further domains, music and video, which were also collected by Blitzer et al. (2007) but are not part of the benchmark dataset. Each validation domain has 1000 positive and 1000 negative labeled reviews, and 15000 unlabeled reviews. Using the validation domains as targets, we vary the number of top k ranked base entries (Equation 3) used for feature expansion during training (Traink) and testing (Testk), and measure the average classification 0 200 400 600 800 1000 0 200 400 600 800 1000 Traink Testk 0.776 0.778 0.78 0.782 0.784 0.786 Figure 1: Feature expansion at train vs. test times. B D K B+D B+K D+K B+D+K 50 55 60 65 70 75 80 85 Source Domains Accuracy on electronics domain Figure 2: Effect of using multiple source domains. accuracy. Figure 1 illustrates the results using a heat map, where dark colors indicate low accuracy values and light colors indicate high accuracy values. We see that expanding features only at test time (the left-most column) does not work well because we have not learned proper weights for the additional features. Similarly, expanding features only at train time (the bottom-most row) also does not perform well because the expanded features are not used during testing. 
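The grid behind Figure 1 amounts to the loop below; expand and train_classifier are placeholders (for instance, the expansion sketch above and an L1-regularized logistic regression) and are not part of any released code.

```python
def expansion_grid(train_data, test_data, expand, train_classifier,
                   k_values=(0, 200, 400, 600, 800, 1000)):
    """Vary the number of base entries used for expansion at train time
    (train_k) and at test time (test_k) and record target-domain accuracy.

    train_data / test_data -- lists of (term_freqs, label) pairs
    expand                 -- callable: (term_freqs, k) -> expanded features
    train_classifier       -- callable returning a model with .predict()
    """
    results = {}
    for train_k in k_values:
        model = train_classifier([(expand(d, train_k), y) for d, y in train_data])
        for test_k in k_values:
            gold = [y for _, y in test_data]
            pred = [model.predict(expand(d, test_k)) for d, _ in test_data]
            correct = sum(p == g for p, g in zip(pred, gold))
            results[(train_k, test_k)] = 100.0 * correct / len(gold)
    return results
```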
The maximum classification accuracy is obtained when Testk = 400 and Traink = 800, and we use these values for the remainder of the experiments described in the paper. 5.3 Combining Multiple Sources Figure 2 shows the effect of combining multiple source domains to build a sentiment classifier for the electronics domain. We see that the kitchen domain is the single best source domain when adapting to the electronics target domain. This behavior 137 0 200 400 600 800 40 45 50 55 60 65 70 75 80 85 Positive/Negative instances Accuracy B E K B+E B+K E+K B+E+K Figure 3: Effect of source domain labeled data. 0 0.2 0.4 0.6 0.8 1 50 55 60 65 70 Source unlabeled dataset size Accuracy B E K B+E B+K E+K B+E+K Figure 4: Effect of source domain unlabeled data. is explained by the fact that in general kitchen appliances and electronic items have similar aspects. But a more interesting observation is that the accuracy that we obtain when we use two source domains is always greater than the accuracy if we use those domains individually. The highest accuracy is achieved when we use all three source domains. Although not shown here for space limitations, we observed similar trends with other domains in the benchmark dataset. To investigate the impact of the quantity of source domain labeled data on our method, we vary the amount of data from zero to 800 reviews, with equal amounts of positive and negative labeled data. Figure 3 shows the accuracy with the DVD domain as the target. Note that source domain labeled data is used both to create the sentiment sensitive thesaurus as well as to train the sentiment classifier. When there are multiple source domains we limit and balance the number of labeled instances as outlined in Section 5.1. The amount of unlabeled data is held constant, so that any change in classification accu0 0.2 0.4 0.6 0.8 1 50 55 60 65 70 Target unlabeled dataset size Accuracy B E K B+E B+K E+K B+E+K Figure 5: Effect of target domain unlabeled data. racy is directly attributable to the source domain labeled instances. Because this is a binary classification task (i.e. positive vs. negative sentiment), a random classifier that does not utilize any labeled data would report a 50% classification accuracy. From Figure 3, we see that when we increase the amount of source domain labeled data the accuracy increases quickly. In fact, by selecting only 400 (i.e. 50% of the total 800) labeled instances per class, we achieve the maximum performance in most of the cases. To study the effect of source and target domain unlabeled data on the performance of our method, we create sentiment sensitive thesauri using different proportions of unlabeled data. The amount of labeled data is held constant and is balanced across multiple domains as outlined in Section 5.1, so any changes in classification accuracy can be directly attributed to the contribution of unlabeled data. Figure 4 shows classification accuracy on the DVD target domain when we vary the proportion of source domain unlabeled data (target domain’s unlabeled data is fixed). Likewise, Figure 5 shows the classification accuracy on the DVD target domain when we vary the proportion of the target domain’s unlabeled data (source domains’ unlabeled data is fixed). From Figures 4 and 5, we see that irrespective of the amount being used, there is a clear performance gain when we use unlabeled data from multiple source domains compared to using a single source domain. 
However, we could not observe a clear gain in performance when we increase the amount of the unlabeled data used to create the sentiment sensitive thesaurus. 138 Method K D E B No Thesaurus 72.61 68.97 70.53 62.72 SCL 80.83 74.56 78.43 72.76 SCL-MI 82.06 76.30 78.93 74.56 SFA 81.48 76.31 75.30 77.73 LSA 79.00 73.50 77.66 70.83 FALSA 80.83 76.33 77.33 73.33 NSS 77.50 73.50 75.50 71.46 Proposed 85.18 78.77 83.63 76.32 Within-Domain 87.70 82.40 84.40 80.40 Table 3: Cross-domain sentiment classification accuracy. 5.4 Cross-Domain Sentiment Classification Table 3 compares our method against a number of baselines and previous cross-domain sentiment classification techniques using the benchmark dataset. For all previous techniques we give the results reported in the original papers. The No Thesaurus baseline simulates the effect of not performing any feature expansion. We simply train a binary classifier using unigrams and bigrams as features from the labeled reviews in the source domains and apply the trained classifier on the target domain. This can be considered to be a lower bound that does not perform domain adaptation. SCL is the structural correspondence learning technique of Blitzer et al. (2006). In SCL-MI, features are selected using the mutual information between a feature (unigram or bigram) and a domain label. After selecting salient features, the SCL algorithm is used to train a binary classifier. SFA is the spectral feature alignment technique of Pan et al. (2010). Both the LSA and FALSA techniques are based on latent semantic analysis (Pan et al., 2010). For the Within-Domain baseline, we train a binary classifier using the labeled data from the target domain. This upper baseline represents the classification accuracy we could hope to obtain if we were to have labeled data for the target domain. Note that this is not a cross-domain classification setting. To evaluate the benefit of using sentiment features on our method, we give a NSS (non-sentiment sensitive) baseline in which we create a thesaurus without using any sentiment features. Proposed is our method. From Table 3, we see that our proposed method returns the best cross-domain sentiment classification accuracy (shown in boldface) for the three domains kitchen appliances, DVDs, and electronics. For the books domain, the best results are returned by SFA. The books domain has the lowest number of unlabeled reviews (around 5000) in the dataset. Because our method relies upon the availability of unlabeled data for the construction of a sentiment sensitive thesaurus, we believe that this accounts for our lack of performance on the books domain. However, given that it is much cheaper to obtain unlabeled than labeled data for a target domain, there is strong potential for improving the performance of our method in this domain. The analysis of variance (ANOVA) and Tukey’s honestly significant differences (HSD) tests on the classification accuracies for the four domains show that our method is statistically significantly better than both the No Thesaurus and NSS baselines, at confidence level 0.05. We therefore conclude that using the sentiment sensitive thesaurus for feature expansion is useful for cross-domain sentiment classification. The results returned by our method are comparable to state-ofthe-art techniques such as SCL-MI and SFA. In particular, the differences between those techniques and our method are not statistically significant. 
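The significance test can be approximated with standard tools. The exact ANOVA design is not specified in the paper, so treating the four per-domain accuracies from Table 3 as the observations for each method is an assumption of this sketch.

```python
# Requires: numpy, scipy, statsmodels
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Accuracies for the domains K, D, E, B taken from Table 3.
acc = {
    "NoThesaurus": [72.61, 68.97, 70.53, 62.72],
    "NSS":         [77.50, 73.50, 75.50, 71.46],
    "Proposed":    [85.18, 78.77, 83.63, 76.32],
}

# One-way ANOVA across the three methods.
print(f_oneway(*acc.values()))

# Tukey's HSD pairwise comparisons at alpha = 0.05.
scores = np.concatenate(list(acc.values()))
groups = np.repeat(list(acc.keys()), 4)
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```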
6 Related Work Compared to single-domain sentiment classification, which has been studied extensively in previous work (Pang and Lee, 2008; Turney, 2002), crossdomain sentiment classification has only recently received attention in response to advances in the area of domain adaptation. Aue and Gammon (2005) report a number of empirical tests into domain adaptation of sentiment classifiers using an ensemble of classifiers. However, most of these tests were unable to outperform a simple baseline classifier that is trained using all labeled data for all domains. Blitzer et al. (2007) apply the structural correspondence learning (SCL) algorithm to train a crossdomain sentiment classifier. They first chooses a set of pivot features using pointwise mutual information between a feature and a domain label. Next, linear predictors are learnt to predict the occurrences of those pivots. Finally, they use singular value decomposition (SVD) to construct a lowerdimensional feature space in which a binary classi139 fier is trained. The selection of pivots is vital to the performance of SCL and heuristically selected pivot features might not guarantee the best performance on target domains. In contrast, our method uses all features when creating the thesaurus and selects a subset of features during training using L1 regularization. Moreover, we do not require SVD, which has cubic time complexity so can be computationally expensive for large datasets. Pan et al. (2010) use structural feature alignment (SFA) to find an alignment between domain specific and domain independent features. The mutual information of a feature with domain labels is used to classify domain specific and domain independent features. Next, spectral clustering is performed on a bipartite graph that represents the relationship between the two sets of features. Finally, the top eigenvectors are selected to construct a lower-dimensional projection. However, not all words can be cleanly classified into domain specific or domain independent, and this process is conducted prior to training a classifier. In contrast, our method lets a particular lexical entry to be listed as a neighour for multiple base entries. Moreover, we expand each feature vector individually and do not require any clustering. Furthermore, unlike SCL and SFA, which consider a single source domain, our method can efficiently adapt from multiple source domains. 7 Conclusions We have described and evaluated a method to construct a sentiment-sensitive thesaurus to bridge the gap between source and target domains in cross-domain sentiment classification using multiple source domains. Experimental results using a benchmark dataset for cross-domain sentiment classification show that our proposed method can improve classification accuracy in a sentiment classifier. In future, we intend to apply the proposed method to other domain adaptation tasks. Acknowledgements This research was conducted while the first author was a visiting research fellow at Sussex university under the postdoctoral fellowship of the Japan Society for the Promotion of Science (JSPS). References Anthony Aue and Michael Gamon. 2005. Customizing sentiment classifiers to new domains: a case study. Technical report, Microsoft Research. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In EMNLP 2006. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. 
Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL 2007, pages 440–447. Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the rasp system. In COLING/ACL 2006 Interactive Presentation Sessions. Teng-Kai Fan and Chia-Hui Chang. 2010. Sentimentoriented contextual advertising. Knowledge and Information Systems, 23(3):321–344. Hui Fang. 2008. A re-examination of query expansion using lexical resources. In ACL 2008, pages 139–147. Z. Harris. 1954. Distributional structure. Word, 10:146– 162. Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the semantic orientation of adjectives. In ACL 1997, pages 174–181. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In KDD 2004, pages 168–177. Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In ECML 1998, pages 137–142. Yue Lu, ChengXiang Zhai, and Neel Sundaresan. 2009. Rated aspect summarization of short comments. In WWW 2009, pages 131–140. Andrew Y. Ng. 2004. Feature selection, l1 vs. l2 regularization, and rotational invariance. In ICML 2004. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In WWW 2010. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In EMNLP 2002, pages 79– 86. Patrick Pantel and Deepak Ravichandran. 2004. Automatically labeling semantic classes. In NAACLHLT’04, pages 321 – 328. Sunita Sarawagi and Alok Kirpal. 2004. Efficient set joins on similarity predicates. In SIGMOD ’04, pages 743–754. 140 Dou Shen, Jianmin Wu, Bin Cao, Jian-Tao Sun, Qiang Yang, Zheng Chen, and Ying Li. 2009. Exploiting term relationship to boost text classification. In CIKM’09, pages 1637 – 1640. Peter D. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In ACL 2002, pages 417–424. Janyce M. Wiebe. 2000. Learning subjective adjective from corpora. In AAAI 2000, pages 735–740. 141
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1395–1404, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Language-independent Compound Splitting with Morphological Operations Klaus Macherey1 Andrew M. Dai2 David Talbot1 Ashok C. Popat1 Franz Och1 1Google Inc. 1600 Amphitheatre Pkwy. Mountain View, CA 94043, USA {kmach,talbot,popat,och}@google.com 2University of Edinburgh 10 Crichton Street Edinburgh, UK EH8 9AB [email protected] Abstract Translating compounds is an important problem in machine translation. Since many compounds have not been observed during training, they pose a challenge for translation systems. Previous decompounding methods have often been restricted to a small set of languages as they cannot deal with more complex compound forming processes. We present a novel and unsupervised method to learn the compound parts and morphological operations needed to split compounds into their compound parts. The method uses a bilingual corpus to learn the morphological operations required to split a compound into its parts. Furthermore, monolingual corpora are used to learn and filter the set of compound part candidates. We evaluate our method within a machine translation task and show significant improvements for various languages to show the versatility of the approach. 1 Introduction A compound is a lexeme that consists of more than one stem. Informally, a compound is a combination of two or more words that function as a single unit of meaning. Some compounds are written as space-separated words, which are called open compounds (e.g. hard drive), while others are written as single words, which are called closed compounds (e.g. wallpaper). In this paper, we shall focus only on closed compounds because open compounds do not require further splitting. The objective of compound splitting is to split a compound into its corresponding sequence of constituents. If we look at how compounds are created from lexemes in the first place, we find that for some languages, compounds are formed by concatenating existing words, while in other languages compounding additionally involves certain morphological operations. These morphological operations can become very complex as we illustrate in the following case studies. 1.1 Case Studies Below, we look at splitting compounds from 3 different languages. The examples introduce in part the notation used for the decision rule outlined in Section 3.1. 1.1.1 English Compound Splitting The word flowerpot can appear as a closed or open compound in English texts. To automatically split the closed form we have to try out every split point and choose the split with minimal costs according to a cost function. Let's assume that we already know that flowerpot must be split into two parts. Then we have to position two split points that mark the end of each part (one is always reserved for the last character position). The number of split points is denoted by K (i.e. K = 2), while the position of split points is denoted by n1 and n2. Since flowerpot consists of 9 characters, we have 8 possibilities to position split point n1 within the characters c1, . . . , c8. The final split point corresponds with the last character, that is, n2 = 9. Trying out all possible single splits results in the following candidates: flowerpot →f + lowerpot flowerpot →fl + owerpot ... flowerpot →flower + pot ... 
flowerpot →flowerpo + t 1395 If we associate each compound part candidate with a cost that reflects how frequent this part occurs in a large collection of English texts, we expect that the correct split flower + pot will have the lowest cost. 1.1.2 German Compound Splitting The previous example covered a case where the compound is constructed by directly concatenating the compound parts. While this works well for English, other languages require additional morphological operations. To demonstrate, we look at the German compound Verkehrszeichen (traffic sign) which consists of the two nouns Verkehr (traffic) and Zeichen (sign). Let's assume that we want to split this word into 3 parts, that is, K = 3. Then, we get the following candidates. Verkehrszeichen →V + e + rkehrszeichen Verkehrszeichen →V + er + kehrszeichen ... Verkehrszeichen →Verkehr + s + zeichen ... Verkehrszeichen →Verkehrszeich + e + n Using the same procedure as described before, we can lookup the compound parts in a dictionary or determine their frequency from large text collections. This yields the optimal split points n1 = 7, n2 = 8, n3 = 15. The interesting part here is the additional s morpheme, which is called a linking morpheme, because it combines the two compound parts to form the compound Verkehrszeichen. If we have a list of all possible linking morphemes, we can hypothesize them between two ordinary compound parts. 1.1.3 Greek Compound Splitting The previous example required the insertion of a linking morpheme between two compound parts. We shall now look at a more complicated morphological operation. The Greek compound χαρτόκουτο (cardboard box) consists of the two parts χαρτί (paper) and κουτί (box). Here, the problem is that the parts χαρτό and κουτο are not valid words in Greek. To lookup the correct words, we must substitute the suffix of the compound part candidates with some other morphemes. If we allow the compound part candidates to be transformed by some morphological operation, we can lookup the transformed compound parts in a dictionary or determine their frequencies in some large collection of Greek texts. Let's assume that we need only one split point. Then this yields the following compound part candidates: χαρτόκουτο →χ + αρτόκουτο χαρτόκουτο →χ + αρτίκουτο g2 : ό / ί χαρτόκουτο →χ + αρτόκουτί g2 : ο / ί ... χαρτόκουτο →χαρτί + κουτί g1 : ό / ί , g2 : ο / ί ... χαρτόκουτο →χαρτίκουτ + ο g1 : ό / ί χαρτόκουτο →χαρτίκουτ + ί g2 : ο / ί Here, gk : s/t denotes the kth compound part which is obtained by replacing string s with string t in the original string, resulting in the transformed part gk. 1.2 Problems and Objectives Our goal is to design a language-independent compound splitter that is useful for machine translation. The previous examples addressed the importance of a cost function that favors valid compound parts versus invalid ones. In addition, the examples have shown that, depending on the language, the morphological operations can become very complex. For most Germanic languages like Danish, German, or Swedish, the list of possible linking morphemes is rather small and can be provided manually. However, in general, these lists can become very large, and language experts who could provide such lists might not be at our disposal. Because it seems infeasible to list the morphological operations explicitly, we want to find and extract those operations automatically in an unsupervised way and provide them as an additional knowledge source to the decompounding algorithm. 
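To summarize the three case studies, the toy search below tries every single split point, optionally strips a linking morpheme from the second part or applies a suffix substitution to the first part, and scores candidate parts by corpus frequency. The word list, linking morphemes, and substitution hook are illustrative placeholders, not the knowledge sources learned later in the paper; the substitution is applied only to the first part for brevity, and casing is ignored.

```python
import math


def single_split(word, freq, linking=("",), subs=()):
    """Toy search over all single split points of `word`.

    freq    -- corpus frequencies of known compound parts (lowercased)
    linking -- candidate linking morphemes between the parts ("" = none)
    subs    -- (s, t) pairs: a suffix s of the first part may be replaced
               by t before the lookup (as in the Greek example)
    """
    word = word.lower()
    total = float(sum(freq.values()))

    def cost(part):
        # Negative log relative frequency; unseen parts get a heavy penalty.
        return -math.log(freq.get(part, 0.5) / total)

    best_cost, best_split = cost(word), (word,)
    for i in range(1, len(word)):
        left, right = word[:i], word[i:]
        for link in linking:
            if link and not right.startswith(link):
                continue
            r = right[len(link):] if link else right
            for s, t in (("", ""),) + tuple(subs):
                l = left[:-len(s)] + t if s and left.endswith(s) else left
                c = cost(l) + cost(r)
                if c < best_cost:
                    best_cost, best_split = c, (l, r)
    return best_split


freq = {"flower": 50, "pot": 80, "verkehr": 40, "zeichen": 60}
print(single_split("flowerpot", freq))                           # ('flower', 'pot')
print(single_split("Verkehrszeichen", freq, linking=("", "s")))  # ('verkehr', 'zeichen')
```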
Another problem is how to evaluate the quality of the compound splitter. One way is to compile for every language a large collection of compounds together with their valid splits and to measure the proportion of correctly split compounds. Unfortunately, such lists do not exist for many languages. 1396 While the training algorithm for our compound splitter shall be unsupervised, the evaluation data needs to be verified by human experts. Since we are interested in improving machine translation and to circumvent the problem of explicitly annotating compounds, we evaluate the compound splitter within a machine translation task. By decompounding training and test data of a machine translation system, we expect an increase in the number of matching phrase table entries, resulting in better translation quality measured in BLEU score (Papineni et al., 2002). If BLEU score is sensitive enough to measure the quality improvements obtained from decompounding, there is no need to generate a separate gold standard for compounds. Finally, we do not want to split non-compounds and named entities because we expect them to be translated non-compositionally. For example, the German word Deutschland (Germany) could be split into two parts Deutsch (German) + Land (country). Although this is a valid split, named entities should be kept as single units. An example for a non-compound is the German participle vereinbart (agreed) which could be wrongly split into the parts Verein (club) + Bart (beard). To avoid overly eager splitting, we will compile a list of non-compounds in an unsupervised way that serves as an exception list for the compound splitter. To summarize, we aim to solve the following problems: • Define a cost function that favors valid compound parts and rejects invalid ones. • Learn morphological operations, which is important for languages that have complex compound forming processes. • Apply compound splitting to machine translation to aid in translation of compounds that have not been seen in the bilingual training data. • Avoid splitting non-compounds and named entities as this may result in wrong translations. 2 Related work Previous work concerning decompounding can be divided into two categories: monolingual and bilingual approaches. Brown (2002) describes a corpus-driven approach for splitting compounds in a German-English translation task derived from a medical domain. A large proportion of the tokens in both texts are cognates with a Latin or Greek etymological origin. While the English text keeps the cognates as separate tokens, they are combined into compounds in the German text. To split these compounds, the author compares both the German and the English cognates on a character level to find reasonable split points. The algorithm described by the author consists of a sequence of if-then-else conditions that are applied on the two cognates to find the split points. Furthermore, since the method relies on finding similar character sequences between both the source and the target tokens, the approach is restricted to cognates and cannot be applied to split more complex compounds. Koehn and Knight (2003) present a frequencybased approach to compound splitting for German. The compound parts and their frequencies are estimated from a monolingual corpus. As an extension to the frequency approach, the authors describe a bilingual approach where they use a dictionary extracted from parallel data to find better split options. 
The authors allow only two linking morphemes between compound parts and a few letters that can be dropped. In contrast to our approach, those operations are not learned automatically, but must be provided explicitly. Garera and Yarowsky (2008) propose an approach to translate compounds without the need for bilingual training texts. The compound splitting procedure mainly follows the approach from (Brown, 2002) and (Koehn and Knight, 2003), so the emphasis is put on finding correct translations for compounds. To accomplish this, the authors use crosslanguage compound evidence obtained from bilingual dictionaries. In addition, the authors describe a simple way to learn glue characters by allowing the deletion of up to two middle and two end characters.1 More complex morphological operations are not taken into account. Alfonseca et al. (2008b) describe a state-of-theart German compound splitter that is particularly robust with respect to noise and spelling errors. The compound splitter is trained on monolingual data. Besides applying frequency and probability-based methods, the authors also take the mutual information of compound parts into account. In addition, the 1However, the glue characters found by this procedure seem to be biased for at least German and Albanian. A very frequent glue morpheme like -es- is not listed, while glue morphemes like -k- and -h- rank very high, although they are invalid glue morphemes for German. Albanian shows similar problems. 1397 authors look for compound parts that occur in different anchor texts pointing to the same document. All these signals are combined and the weights are trained using a support vector machine classifier. Alfonseca et al. (2008a) apply this compound splitter on various other Germanic languages. Dyer (2009) applies a maximum entropy model of compound splitting to generate segmentation lattices that serve as input to a translation system. To train the model, reference segmentations are required. Here, we produce only single best segmentations, but otherwise do not rely on reference segmentations. 3 Compound Splitting Algorithm In this section, we describe the underlying optimization problem and the algorithm used to split a token into its compound parts. Starting from Bayes' decision rule, we develop the Bellman equation and formulate a dynamic programming-based algorithm that takes a word as input and outputs the constituent compound parts. We discuss the procedure used to extract compound parts from monolingual texts and to learn the morphological operations using bilingual corpora. 3.1 Decision Rule for Compound Splitting Given a token w = c1, . . . , cN = cN 1 consisting of a sequence of N characters ci, the objective function is to find the optimal number ˆK and sequence of split points ˆn ˆ K 0 such that the subwords are the constituents of the token, where2 n0 := 0 and nK := N: w = cN 1 →( ˆK, ˆn ˆ K 0 ) = = arg max K,nK 0 { Pr(cN 1 , K, nK 0 ) } (1) = arg max K,nK 0 { Pr(K) · Pr(cN 1 , nK 0 |K) } ≊arg max K,nK 0 { p(K) · K ∏ k=1 p(cnk nk−1+1, nk−1|K)· ·p(nk|nk−1, K)} (2) with p(n0) = p(nK|·) ≡1. Equation 2 requires that token w can be fully decomposed into a sequence 2For algorithmic reasons, we use the start position 0 to represent a fictitious start symbol before the first character of the word. of lexemes, the compound parts. Thus, determining the optimal segmentation is sufficient for finding the constituents. While this may work for some languages, the subwords are not valid words in general as discussed in Section 1.1.3. 
Therefore, we allow the lexemes to be the result of a transformation process, where the transformed lexemes are denoted by $g_1^K$. This leads to the following refined decision rule:
\[
w = c_1^N \rightarrow (\hat{K}, \hat{n}_0^{\hat{K}}, \hat{g}_1^{\hat{K}})
= \arg\max_{K, n_0^K, g_1^K} \big\{ \Pr(c_1^N, K, n_0^K, g_1^K) \big\} \quad (3)
\]
\[
= \arg\max_{K, n_0^K, g_1^K} \big\{ \Pr(K) \cdot \Pr(c_1^N, n_0^K, g_1^K \mid K) \big\} \quad (4)
\]
\[
\approx \arg\max_{K, n_0^K, g_1^K} \Big\{ p(K) \cdot \prod_{k=1}^{K} \underbrace{p(c_{n_{k-1}+1}^{n_k}, n_{k-1}, g_k \mid K)}_{\text{compound part probability}} \cdot p(n_k \mid n_{k-1}, K) \Big\} \quad (5)
\]
The compound part probability is a zero-order model. If we penalize each split with a constant split penalty $\xi$ and make the probability independent of the number of splits $K$, we arrive at the following decision rule:
\[
w = c_1^N \rightarrow (\hat{K}, \hat{n}_0^{\hat{K}}, \hat{g}_1^{\hat{K}})
= \arg\max_{K, n_0^K, g_1^K} \Big\{ \xi^K \cdot \prod_{k=1}^{K} p(c_{n_{k-1}+1}^{n_k}, n_{k-1}, g_k) \Big\} \quad (6)
\]
3.2 Dynamic Programming
We use dynamic programming to find the optimal split sequence. Each split incurs certain costs that are determined by a cost function. The total costs of a decomposed word can be computed from the individual costs of the component parts. For the dynamic programming approach, we define the following auxiliary function $Q$ with $n_k = j$:
\[
Q(c_1^j) = \max_{n_0^k, g_1^k} \Big\{ \xi^k \cdot \prod_{\kappa=1}^{k} p(c_{n_{\kappa-1}+1}^{n_\kappa}, n_{\kappa-1}, g_\kappa) \Big\}
\]
that is, $Q(c_1^j)$ is equal to the minimal costs (maximum probability) that we assign to the prefix string $c_1^j$ where we have used $k$ split points at positions $n_1^k$. This yields the following recursive equation:
\[
Q(c_1^j) = \max_{n_k, g_k} \big\{ \xi \cdot Q(c_1^{n_{k-1}}) \cdot p(c_{n_{k-1}+1}^{n_k}, n_{k-1}, g_k) \big\} \quad (7)
\]
with backpointer
\[
B(j) = \arg\max_{n_k, g_k} \big\{ \xi \cdot Q(c_1^{n_{k-1}}) \cdot p(c_{n_{k-1}+1}^{n_k}, n_{k-1}, g_k) \big\} \quad (8)
\]
Algorithm 1 Compound splitting
Input: input word $w = c_1^N$
Output: compound parts
  Q(0) = 0
  Q(1) = · · · = Q(N) = ∞
  for i = 0, . . . , N − 1 do
    for j = i + 1, . . . , N do
      split-costs = Q(i) + cost(c_{i+1}^j, i, g_j) + split-penalty
      if split-costs < Q(j) then
        Q(j) = split-costs
        B(j) = (i, g_j)
      end if
    end for
  end for
Using logarithms in Equations 7 and 8, we can interpret the quantities as additive costs rather than probabilities. This yields Algorithm 1, which is quadratic in the length of the input string. By enforcing that each compound part does not exceed a predefined constant length $\ell$, we can change the second for loop as follows:
  for j = i + 1, . . . , min(i + ℓ, N) do
With this change, Algorithm 1 becomes linear in the length of the input word, $O(|w|)$.
4 Cost Function and Knowledge Sources
The performance of Algorithm 1 depends on the cost function cost(·), that is, the probability $p(c_{n_{k-1}+1}^{n_k}, n_{k-1}, g_k)$. This cost function incorporates knowledge about morpheme transformations, morpheme positions within a compound part, and the compound parts themselves.
4.1 Learning Morphological Operations using Phrase Tables
Let $s$ and $t$ be strings of the (source) language alphabet $A$. A morphological operation $s/t$ is a pair of strings $s, t \in A^*$, where $s$ is replaced by $t$. With the usual definition of the Kleene operator $*$, $s$ and $t$ can be empty, denoted by $\varepsilon$. An example for such a pair is $\varepsilon/es$, which models the linking morpheme es in the German compound Bundesagentur (federal agency): Bundesagentur → Bund + es + Agentur. Note that by replacing either $s$ or $t$ with $\varepsilon$, we can model insertions or deletions of morphemes. The explicit dependence on position $n_{k-1}$ in Equation 6 allows us to determine if we are at the beginning, in the middle, or at the end of a token. Thus, we can distinguish between start, middle, or end morphemes and hypothesize them during search.3 Although not explicitly listed in Algorithm 1, we disallow sequences of linking morphemes.
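To make the search concrete, the following is a minimal Python sketch of Algorithm 1 under simplifying assumptions: the cost of a compound part is its negative log count in a small made-up part-frequency table, the candidate glue morphemes are limited to an illustrative set, and position information is reduced to not stripping glue at the end of the word. The frequency table, glue list, penalty value, and length bound are placeholders, not the settings used in the paper.

```python
import math

# Hypothetical knowledge sources; the real system learns these from data.
PART_COUNTS = {"bund": 50000, "agentur": 20000, "flower": 15000, "pot": 5000}
GLUE = ["", "es", "s"]        # candidate linking morphemes (illustrative)
SPLIT_PENALTY = 20.0          # constant cost added per hypothesized part (cf. the split penalty)
MAX_PART_LEN = 20             # upper bound on the length of a compound part

def part_cost(part):
    """Negative log count of a candidate compound part; infinite if unknown."""
    count = PART_COUNTS.get(part)
    return -math.log(count) if count else float("inf")

def split(word):
    """Dynamic-programming decompounding in the spirit of Algorithm 1.

    Q[j] holds the best cost of any segmentation of word[:j]; B[j] stores a
    backpointer (previous split point, recovered part).
    """
    n = len(word)
    Q = [0.0] + [float("inf")] * n
    B = [None] * (n + 1)
    for i in range(n):
        if Q[i] == float("inf"):
            continue
        for j in range(i + 1, min(i + MAX_PART_LEN, n) + 1):
            for glue in GLUE:
                piece = word[i:j].lower()
                # Strip the hypothesized glue morpheme only in non-final position.
                if glue and j < n and piece.endswith(glue):
                    piece = piece[: -len(glue)]
                cost = Q[i] + part_cost(piece) + SPLIT_PENALTY
                if cost < Q[j]:
                    Q[j] = cost
                    B[j] = (i, piece)
    if Q[n] == float("inf"):
        return [word]          # no decomposition found: keep the word unsplit
    parts, j = [], n
    while j > 0:
        i, piece = B[j]
        parts.append(piece)
        j = i
    return list(reversed(parts))

print(split("Bundesagentur"))  # expected: ['bund', 'agentur']
print(split("flowerpot"))      # expected: ['flower', 'pot']
```

As noted above, sequences of linking morphemes still have to be ruled out.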
This can be achieved by setting the costs to infinity for those morpheme hypotheses, which directly succeed another morpheme hypothesis. To learn the morphological operations involved in compounding, we determine the differences between a compound and its compound parts. This can be done by computing the Levenshtein distance between the compound and its compound parts, with the allowable edit operations being insertion, deletion, or substitution of one or more characters. If we store the current and previous characters, edit operation and the location (prefix, infix or suffix) at each position during calculation of the Levenshtein distance then we can obtain the morphological operations required for compounding. Applying the inverse operations, that is, replacing t with s yields the operation required for decompounding. 4.1.1 Finding Compounds and their Parts To learn the morphological operations, we need compounds together with their compound parts. The basic idea of finding compound candidates and their compound parts in a bilingual setting are related to the ideas presented in (Garera and Yarowsky, 2008). Here, we use phrase tables rather than dictionaries. Although phrase tables might contain more noise, we believe that overall phrase tables cover more phenomena of translations than what can be found in dictionaries. The procedure is as follows. We are given a phrase table that provides translations for phrases from a source language l into English and from English into l. Under the assumption that English does not contain many closed compounds, we can search 3We jointly optimize over K and the split points nk, so we know that cnK nK−1 is a suffix of w. 1399 the phrase table for those single-token source words f in language l, which translate into multi-token English phrases e1, . . . , en for n > 1. This results in a list of (f; e1, . . . , en) pairs, which are potential compound candidates together with their English translations. If for each pair, we take each token ei from the English (multi-token) phrase and lookup the corresponding translation for language l to get gi, we should find entries that have at least some partial match with the original source word f, if f is a true compound. Because the translation phrase table was generated automatically during the training of a multi-language translation system, there is no guarantee that the original translations are correct. Thus, the bilingual extraction procedure is subject to introduce a certain amount of noise. To mitigate this, thresholds such as minimum edit distance between the potential compound and its parts, minimum co-occurrence frequencies for the selected bilingual phrase pairs and minimum source and target word lengths are used to reduce the noise at the expense of finding fewer compounds. Those entries that obey these constraints are output as triples of form: (f; e1, . . . , en; g1, . . . , gn) (9) where • f is likely to be a compound, • e1, . . . , en is the English translation, and • g1, . . . , gn are the compound parts of f. The following example for German illustrates the process. Suppose that the most probable translation for Überweisungsbetrag is transfer amount using the phrase table. We then look up the translation back to German for each translated token: transfer translates to Überweisung and amount translates to Betrag. 
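As an illustration of how such an operation can be read off once a compound and its candidate parts are in hand, here is a small sketch that uses Python's difflib as a rough stand-in for the Levenshtein alignment described above; the real procedure records the edit operations and their positions during the distance computation itself, and the (compound; parts) pair is assumed to have already been extracted from the phrase table as described.

```python
import difflib

def glue_operations(compound, parts):
    """Compare a compound with the concatenation of its parts and collect the
    character-level edit operations, tagged as prefix/infix/suffix."""
    concat = "".join(parts).lower()
    compound = compound.lower()
    ops = []
    matcher = difflib.SequenceMatcher(a=concat, b=compound)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            continue
        # s/t in the paper's notation: s (from the concatenated parts) is replaced by t.
        s, t = concat[i1:i2], compound[j1:j2]
        if j1 == 0:
            where = "prefix"
        elif j2 == len(compound):
            where = "suffix"
        else:
            where = "infix"
        ops.append((s or "ε", t or "ε", where))
    return ops

# (compound; compound parts) pair assumed to come from the phrase table:
print(glue_operations("Überweisungsbetrag", ["Überweisung", "Betrag"]))
# expected: [('ε', 's', 'infix')]  -- the German linking morpheme -s-
```

For the running example this recovers the German linking morpheme -s-; how the best-matching part sequence is selected is described next.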
We then calculate the distance between all permutations of the parts and the original compound and choose the one with the lowest distance and highest translation probability: Überweisung Betrag. 4.2 Monolingual Extraction of Compound Parts The most important knowledge source required for Algorithm 1 is a word-frequency list of compound parts that is used to compute the split costs. The procedure described in Section 4.1.1 is useful for learning morphological operations, but it is not sufficient to extract an exhaustive list of compound parts. Such lists can be extracted from monolingual data for which we use language model (LM) word frequency lists in combination with some filter steps. The extraction process is subdivided into 2 passes, one over a high-quality news LM to extract the parts and the other over a web LM to filter the parts. 4.2.1 Phase 1: Bootstrapping pass In the first pass, we generate word frequency lists derived from news articles for multiple languages. The motivation for using news articles rather than arbitrary web texts is that news articles are in general less noisy and contain fewer spelling mistakes. The language-dependent word frequency lists are filtered according to a sequence of filter steps. These filter steps include discarding all words that contain digits or punctuations other than hyphen, minimum occurrence frequency, and a minimum length which we set to 4. The output is a table that contains preliminary compound parts together with their respective counts for each language. 4.2.2 Phase 2: Filtering pass In the second pass, the compound part vocabulary is further reduced and filtered. We generate a LM vocabulary based on arbitrary web texts for each language and build a compound splitter based on the vocabulary list that was generated in phase 1. We now try to split every word of the web LM vocabulary based on the compound splitter model from phase 1. For the compound parts that occur in the compound splitter output, we determine how often each compound part was used and output only those compound parts whose frequency exceed a predefined threshold n. 4.3 Example Suppose we have the following word frequencies output from pass 1: floor 10k poll 4k flow 9k pot 5k flower 15k potter 20k In pass 2, we observe the word flowerpot. With the above list, the only compound parts used are flower and pot. If we did not split any other words and threshold at n = 1, our final list would consist of flower and pot. This filtering pass has the advantage of outputting only those compound part candidates 1400 which were actually used to split words from web texts. The thresholding also further reduces the risk of introducing noise. Another advantage is that since the set of parts output in the first pass may contain a high number of compounds, the filter is able to remove a large number of these compounds by examining relative frequencies. In our experiments, we have assumed that compound part frequencies are higher than the compound frequency and so remove words from the part list that can themselves be split and have a relatively high frequency. Finally, after removing the low frequency compound parts, we obtain the final compound splitter vocabulary. 4.4 Generating Exception Lists To avoid eager splitting of non-compounds and named entities, we use a variant of the procedure described in Section 4.1.1. 
By emitting all those source words that translate with high probability into singletoken English words, we obtain a list of words that should not be split.4 4.5 Final Cost Function The final cost function is defined by the following components which are combined log-linearly. • The split penalty ξ penalizes each compound part to avoid eager splitting. • The cost for each compound part gk is computed as −log C(gk), where C(gk) is the unigram count for gk obtained from the news LM word frequency list. Since we use a zero-order model, we can ignore the normalization and work with unigram counts rather than unigram probabilities. • Because Algorithm 1 iterates over the characters of the input token w, we can infer from the boundaries (i, j) if we are at the start, in the middle, or at the end of the token. Applying a morphological operation adds costs 1 to the overall costs. Although the cost function is language dependent, we use the same split penalty weight ξ = 20 for all languages except for German, where the split penalty weight is set to 13.5. 5 Results To show the language independence of the approach within a machine translation task, we translate from languages belonging to different language families into English. The publicly available Europarl corpus is not suitable for demonstrating the utility of compound splitting because there are few unseen compounds in the test section of the Europarl corpus. The WMT shared translation task has a broader domain compared to Europarl but covers only a few languages. Hence, we present results for GermanEnglish using the WMT-07 data and cover other languages using non-public corpora which contain news as well as open-domain web texts. Table 1 lists the various corpus statistics. The source languages are grouped according to their language family. For learning the morphological operations, we allowed the substitution of at most 2 consecutive characters. Furthermore, we only allowed at most one morphological substitution to avoid introducing too much noise. The found morphological operations were sorted according to their frequencies. Those which occurred less than 100 times were discarded. Examples of extracted morphological operations are given in Table 2. Because the extraction procedure described in Section 4.1 is not purely restricted to the case of decompounding, we found that many morphological operations emitted by this procedure reflect morphological variations that are not directly linked to compounding, but caused by inflections. To generate the language-dependent lists of compound parts, we used language model vocabulary lists5 generated from news texts for different languages as seeds for the first pass. These lists were filtered by discarding all entries that either contained digits, punctuations other than hyphens, or sequences of the same characters. In addition, the infrequent entries were discarded as well to further reduce noise. For the second pass, we used the lists generated in the first pass together with the learned morphological operations to construct a preliminary compound splitter. We then generated vocabulary lists for monolingual web texts and applied the preliminary compound splitter onto this list. The used 4Because we will translate only into English, this is not an issue for the introductory example flowerpot. 5The vocabulary lists also contain the word frequencies. We use the term vocabulary list synonymously for a word frequency list. 
1401 Family Src Language #Tokens Train src/trg #Tokens Dev src/trg #Tokens Tst src/trg Germanic Danish 196M 201M 43, 475 44, 479 72, 275 74, 504 German 43M 45M 23, 151 22, 646 45, 077 43, 777 Norwegian 251M 255M 42, 096 43, 824 70, 257 73, 556 Swedish 201M 213M 42, 365 44, 559 70, 666 74, 547 Hellenic Greek 153M 148M 47, 576 44, 658 79, 501 74, 776 Uralic Estonian 199M 244M 34, 987 44, 658 57, 916 74, 765 Finnish 205M 246M 32, 119 44, 658 53, 365 74, 771 Table 1: Corpus statistics for various language pairs. The target language is always English. The source languages are grouped according to their language family. Language morpholog. operations Danish -/ε, s/ε German -/ε, s/ε, es/ε, n/ε, e/ε, en/ε Norwegian -/ε, s/ε, e/ε Swedish -/ε, s/ε Greek ε/α, ε/ς, ε/η, ο/ί, ο/ί, ο/ν Estonian -/ε, e/ε, se/ε Finnish ε/n, n/ε, ε/en Table 2: Examples of morphological operations that were extracted from bilingual corpora. compound parts were collected and sorted according to their frequencies. Those which were used at least 2 times were kept in the final compound parts lists. Table 3 reports the number of compound parts kept after each pass. For example, the Finnish news vocabulary list initially contained 1.7M entries. After removing non-alpha and infrequent words in the first filter step, we obtained 190K entries. Using the preliminary compound splitter in the second filter step resulted in 73K compound part entries. The finally obtained compound splitter was integrated into the preprocessing pipeline of a stateof-the-art statistical phrase-based machine translation system that works similar to the Moses decoder (Koehn et al., 2007). By applying the compound splitter during both training and decoding we ensured that source language tokens were split in the same way. Table 4 presents results for various language-pairs with and without decompounding. Both the Germanic and the Uralic languages show significant BLEU score improvements of 1.3 BLEU points on average. The confidence intervals were computed using the bootstrap resampling normal approximation method described in (Noreen, 1989). While the compounding process for Germanic languages is rather simple and requires only a few linking morphemes, compounds used in Uralic languages have a richer morphology. In contrast to the Germanic and Uralic languages, we did not observe improvements for Greek. To investigate this lack of performance, we turned off transliteration and kept unknown source words in their original script. We analyzed the number of remaining source characters in the baseline system and the system using compound splitting by counting the number of Greek characters in the translation output. The number of remaining Greek characters in the translation output was reduced from 6, 715 in the baseline system to 3, 624 in the system which used decompounding. In addition, a few other metrics like the number of source words that consisted of more than 15 characters decreased as well. Because we do not know how many compounds are actually contained in the Greek source sentences6 and because the frequency of using compounds might vary across languages, we cannot expect the same performance gains across languages belonging to different language families. An interesting observation is, however, that if one language from a language family shows performance gains, then there are performance gains for all the languages in that family. We also investigated the effect of not using any morphological operations. 
Disallowing all morphological operations accounts for a loss of 0.1 - 0.2 BLEU points across translation systems and increases the compound parts vocabulary lists by up to 20%, which means that most of the gains can be achieved with simple concatenation. The exception lists were generated according to the procedure described in Section 4.4. Since we aimed for precision rather than recall when constructing these lists, we inserted only those source 6Quite a few of the remaining Greek characters belong to rare named entities. 1402 Language initial vocab size #parts after 1st pass #parts after 2nd pass Danish 918, 708 132, 247 49, 592 German 7, 908, 927 247, 606 45, 059 Norwegian 1, 417, 129 237, 099 62, 107 Swedish 1, 907, 632 284, 660 82, 120 Greek 877, 313 136, 436 33, 130 Estonian 742, 185 81, 132 36, 629 Finnish 1, 704, 415 190, 507 73, 568 Table 3: Number of remaining compound parts for various languages after the first and second filter step. System BLEU[%] w/o splitting BLEU[%] w splitting ∆ CI 95% Danish 42.55 44.39 1.84 (± 0.65) German WMT-07 25.76 26.60 0.84 (± 0.70) Norwegian 42.77 44.58 1.81 (± 0.64) Swedish 36.28 38.04 1.76 (± 0.62) Greek 31.85 31.91 0.06 (± 0.61) Estonian 20.52 21.20 0.68 (± 0.50) Finnish 25.24 26.64 1.40 (± 0.57) Table 4: BLEU score results for various languages translated into English with and without compound splitting. Language Split source translation German no Die EU ist nicht einfach ein Freundschaftsclub. The EU is not just a Freundschaftsclub. yes Die EU ist nicht einfach ein Freundschaft Club The EU is not simply a friendship club. Greek no Τι είναι παλμοκωδική διαμόρφωση; What παλμοκωδική configuration? yes Τι είναι παλμο κωδικη διαμόρφωση; What is pulse code modulation? Finnish no Lisävuodevaatteet ja pyyheliinat ovat kaapissa. Lisävuodevaatteet and towels are in the closet. yes Lisä Vuode Vaatteet ja pyyheliinat ovat kaapissa. Extra bed linen and towels are in the closet. Table 5: Examples of translations into English with and without compound splitting. words whose co-occurrence count with a unigram translation was at least 1, 000 and whose translation probability was larger than 0.1. Furthermore, we required that at least 70% of all target phrase entries for a given source word had to be unigrams. All decompounding results reported in Table 4 were generated using these exception lists, which prevented wrong splits caused by otherwise overly eager splitting. 6 Conclusion and Outlook We have presented a language-independent method for decompounding that improves translations for compounds that otherwise rarely occur in the bilingual training data. We learned a set of morphological operations from a translation phrase table and determined suitable compound part candidates from monolingual data in a two pass process. This allowed us to learn morphemes and operations for languages where these lists are not available. In addition, we have used the bilingual information stored in the phrase table to avoid splitting non-compounds as well as frequent named entities. All knowledge sources were combined in a cost function that was applied in a compound splitter based on dynamic programming. Finally, we have shown this improves translation performance on languages from different language families. The weights were not optimized in a systematic way but set manually to their respective values. In the future, the weights of the cost function should be learned automatically by optimizing an appropriate error function. 
Instead of using gold data, the development data for optimizing the error function could be collected without supervision using the methods proposed in this paper. 1403 References Enrique Alfonseca, Slaven Bilac, and Stefan Paries. 2008a. Decompounding query keywords from compounding languages. In Proc. of the 46th Annual Meeting of the Association for Computational Linguistics (ACL): Human Language Technologies (HLT), pages 253--256, Columbus, Ohio, USA, June. Enrique Alfonseca, Slaven Bilac, and Stefan Paries. 2008b. German decompounding in a difficult corpus. In A. Gelbukh, editor, Lecture Notes in Computer Science (LNCS): Proc. of the 9th Int. Conf. on Intelligent Text Processing and Computational Linguistics (CICLING), volume 4919, pages 128--139. Springer Verlag, February. Ralf D. Brown. 2002. Corpus-Driven Splitting of Compound Words. In Proc. of the 9th Int. Conf. on Theoretical and Methodological Issues in Machine Translation (TMI), pages 12--21, Keihanna, Japan, March. Chris Dyer. 2009. Using a maximum entropy model to build segmentation lattices for mt. In Proc. of the Human Language Technologies (HLT): The Annual Conf. of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 406--414, Boulder, Colorado, June. Nikesh Garera and David Yarowsky. 2008. Translating Compounds by Learning Component Gloss Translation Models via Multiple Languages. In Proc. of the 3rd Internation Conference on Natural Language Processing (IJCNLP), pages 403--410, Hyderabad, India, January. Philipp Koehn and Kevin Knight. 2003. Empirical methods for compound splitting. In Proc. of the 10th Conf. of the European Chapter of the Association for Computational Linguistics (EACL), volume 1, pages 187--193, Budapest, Hungary, April. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. of the 44th Annual Meeting of the Association for Computational Linguistics (ACL), volume 1, pages 177--180, Prague, Czech Republic, June. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses. John Wiley & Sons, Canada. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311--318, Philadelphia, Pennsylvania, July. 1404
2011
140
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1405–1414, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Parsing the Internal Structure of Words: A New Paradigm for Chinese Word Segmentation Zhongguo Li State Key Laboratory on Intelligent Technology and Systems Tsinghua National Laboratory for Information Science and Technology Department of Computer Science and Technology Tsinghua University, Beijing 100084, China [email protected] Abstract Lots of Chinese characters are very productive in that they can form many structured words either as prefixes or as suffixes. Previous research in Chinese word segmentation mainly focused on identifying only the word boundaries without considering the rich internal structures of many words. In this paper we argue that this is unsatisfying in many ways, both practically and theoretically. Instead, we propose that word structures should be recovered in morphological analysis. An elegant approach for doing this is given and the result is shown to be promising enough for encouraging further effort in this direction. Our probability model is trained with the Penn Chinese Treebank and actually is able to parse both word and phrase structures in a unified way. 1 Why Parse Word Structures? Research in Chinese word segmentation has progressed tremendously in recent years, with state of the art performing at around 97% in precision and recall (Xue, 2003; Gao et al., 2005; Zhang and Clark, 2007; Li and Sun, 2009). However, virtually all these systems focus exclusively on recognizing the word boundaries, giving no consideration to the internal structures of many words. Though it has been the standard practice for many years, we argue that this paradigm is inadequate both in theory and in practice, for at least the following four reasons. The first reason is that if we confine our definition of word segmentation to the identification of word boundaries, then people tend to have divergent opinions as to whether a linguistic unit is a word or not (Sproat et al., 1996). This has led to many different annotation standards for Chinese word segmentation. Even worse, this could cause inconsistency in the same corpus. For instance, 䉂擌奒 ‘vice president’ is considered to be one word in the Penn Chinese Treebank (Xue et al., 2005), but is split into two words by the Peking University corpus in the SIGHAN Bakeoffs (Sproat and Emerson, 2003). Meanwhile, 䉂䀓惼‘vice director’ and 䉂 䚲䡮‘deputy manager’ are both segmented into two words in the same Penn Chinese Treebank. In fact, all these words are composed of the prefix 䉂‘vice’ and a root word. Thus the structure of 䉂擌奒‘vice president’ can be represented with the tree in Figure 1. Without a doubt, there is complete agreeNN ll , , JJf 䉂 NNf 擌奒 Figure 1: Example of a word with internal structure. ment on the correctness of this structure among native Chinese speakers. So if instead of annotating only word boundaries, we annotate the structures of every word, 1 then the annotation tends to be more 1Here it is necessary to add a note on terminology used in this paper. Since there is no universally accepted definition of the “word” concept in linguistics and especially in Chinese, whenever we use the term “word” we might mean a linguistic unit such as 䉂擌奒‘vice president’ whose structure is shown as the tree in Figure 1, or we might mean a smaller unit such as 擌奒‘president’ which is a substructure of that tree. 
Hopefully, 1405 consistent and there could be less duplication of efforts in developing the expensive annotated corpus. The second reason is applications have different requirements for granularity of words. Take the personal name 撱嗤吼‘Zhou Shuren’ as an example. It’s considered to be one word in the Penn Chinese Treebank, but is segmented into a surname and a given name in the Peking University corpus. For some applications such as information extraction, the former segmentation is adequate, while for others like machine translation, the later finer-grained output is more preferable. If the analyzer can produce a structure as shown in Figure 4(a), then every application can extract what it needs from this tree. A solution with tree output like this is more elegant than approaches which try to meet the needs of different applications in post-processing (Gao et al., 2004). The third reason is that traditional word segmentation has problems in handling many phenomena in Chinese. For example, the telescopic compound 㦌撥怂惆‘universities, middle schools and primary schools’ is in fact composed of three coordinating elements 㦌惆‘university’, 撥惆‘middle school’ and 怂惆‘primary school’. Regarding it as one flat word loses this important information. Another example is separable words like 扩扙‘swim’. With a linear segmentation, the meaning of ‘swimming’ as in 扩堑扙‘after swimming’ cannot be properly represented, since 扩扙‘swim’ will be segmented into discontinuous units. These language usages lie at the boundary between syntax and morphology, and are not uncommon in Chinese. They can be adequately represented with trees (Figure 2). (a) NNHHH    JJHHH    JJf 㦌 JJf 撥 JJf 怂 NNf 惆 (b) VVHHH    VV ZZ   VVf 扩 VVf 堑 NNf 扙 Figure 2: Example of telescopic compound (a) and separable word (b). The last reason why we should care about word the context will always make it clear what is being referred to with the term “word”. structures is related to head driven statistical parsers (Collins, 2003). To illustrate this, note that in the Penn Chinese Treebank, the word 戽䊂䠽吼‘English People’ does not occur at all. Hence constituents headed by such words could cause some difficulty for head driven models in which out-ofvocabulary words need to be treated specially both when they are generated and when they are conditioned upon. But this word is in turn headed by its suffix 吼‘people’, and there are 2,233 such words in Penn Chinese Treebank. If we annotate the structure of every compound containing this suffix (e.g. Figure 3), such data sparsity simply goes away. NN bb " " NRf 戽䊂䠽 NNf 吼 Figure 3: Structure of the out-of-vocabulary word 戽䊂 䠽吼‘English People’. Had there been only a few words with internal structures, current Chinese word segmentation paradigm would be sufficient. We could simply recover word structures in post-processing. But this is far from the truth. In Chinese there is a large number of such words. 
We just name a few classes of these words and give one example for each class (a dot is used to separate roots from affixes): personal name: 㡿増·揽‘Nagao Makoto’ location name: 凝挕·撲‘New York State’ noun with a suffix: 䆩䡡·勬‘classifier’ noun with a prefix: 敏·䧥䧥‘mother-to-be’ verb with a suffix: 敧䃄·䑺‘automatize’ verb with a prefix: 䆓·噙‘waterproof’ adjective with a suffix: 䉅䏜·怮‘composite’ adjective with a prefix: 䆚·搔喪‘informal’ pronoun with a prefix: 䊈·墠‘everybody’ time expression: 憘䛊䛊壊·兣‘the year 1995’ ordinal number: 䀱·喛憘‘eleventh’ retroflex suffixation: 䑳䃹·䄎‘flower’ This list is not meant to be complete, but we can get a feel of how extensive the words with non-trivial structures can be. With so many productive suffixes and prefixes, analyzing word structures in postprocessing is difficult, because a character may or may not act as an affix depending on the context. 1406 For example, the character 吼‘people’ in 撇嗤吼 ‘the one who plants’ is a suffix, but in the personal name 撱嗤吼‘Zhou Shuren’ it isn’t. The structures of these two words are shown in Figure 4. (a) NR ZZ   NFf 撱 NGf 嗤吼 (b) NN ZZ   VVf 撇嗤 NNf 吼 Figure 4: Two words that differ only in one character, but have different internal structures. The character 吼 ‘people’ is part of a personal name in tree (a), but is a suffix in (b). A second reason why generally we cannot recover word structures in post-processing is that some words have very complex structures. For example, the tree of 壃搕䈿擌懂揶‘anarchist’ is shown in Figure 5. Parsing this structure correctly without a principled method is difficult and messy, if not impossible. NNaaa ! ! ! NNHHH    VV ZZ   VVf 壃 NNf 搕䈿 NNf 擌懂 NNf 揶 Figure 5: An example word which has very complex structures. Finally, it must be mentioned that we cannot store all word structures in a dictionary, as the word formation process is very dynamic and productive in nature. Take 䌬‘hall’ as an example. Standard Chinese dictionaries usually contain 埣嗖䌬‘library’, but not many other words such as 䎰愒䌬‘aquarium’ generated by this same character. This is understandable since the character 䌬‘hall’ is so productive that it is impossible for a dictionary to list every word with this character as a suffix. The same thing happens for natural language processing systems. Thus it is necessary to have a dynamic mechanism for parsing word structures. In this paper, we propose a new paradigm for Chinese word segmentation in which not only word boundaries are identified but the internal structures of words are recovered (Section 3). To achieve this, we design a joint morphological and syntactic parsing model of Chinese (Section 4). Our generative story describes the complete process from sentence and word structures to the surface string of characters in a top-down fashion. With this probability model, we give an algorithm to find the parse tree of a raw sentence with the highest probability (Section 5). The output of our parser incorporates word structures naturally. Evaluation shows that the model can learn much of the regularity of word structures, and also achieves reasonable accuracy in parsing higher level constituent structures (Section 6). 2 Related Work The necessity of parsing word structures has been noticed by Zhao (2009), who presented a characterlevel dependency scheme as an alternative to the linear representation of words. Although our work is based on the same notion, there are two key differences. 
The first one is that part-of-speech tags and constituent labels are fundamental for our parsing model, while Zhao focused on unlabeled dependencies between characters in a word, and part-ofspeech information was not utilized. Secondly, we distinguish explicitly the generation of flat words such as 䑵喏䃮‘Washington’ and words with internal structures. Our parsing algorithm also has to be adapted accordingly. Such distinction was not made in Zhao’s parsing model and algorithm. Many researchers have also noticed the awkwardness and insufficiency of current boundary-only Chinese word segmentation paradigm, so they tried to customize the output to meet the requirements of various applications (Wu, 2003; Gao et al., 2004). In a related research, Jiang et al. (2009) presented a strategy to transfer annotated corpora between different segmentation standards in the hope of saving some expensive human labor. We believe the best solution to the problem of divergent standards and requirements is to annotate and analyze word structures. Then applications can make use of these structures according to their own convenience. 1407 Since the distinction between morphology and syntax in Chinese is somewhat blurred, our model for word structure parsing is integrated with constituent parsing. There has been many efforts to integrate Chinese word segmentation, part-of-speech tagging and parsing (Wu and Zixin, 1998; Zhou and Su, 2003; Luo, 2003; Fung et al., 2004). However, in these research all words were considered to be flat, and thus word structures were not parsed. This is a crucial difference with our work. Specifically, consider the word 碾碜扨‘olive oil’. Our parser output tree Figure 6(a), while Luo (2003) output tree (b), giving no hint to the structure of this word since the result is the same with a real flat word 䧢哫膝 ‘Los Angeles’(c). (a) NN ZZ   NNf 碾碜 NNf 扨 (b) NN NNf 碾碜扨 (c) NR NRf 䧢哫膝 Figure 6: Difference between our output (a) of parsing the word 碾碜扨‘olive oil’ and the output (b) of Luo (2003). In (c) we have a true flat word, namely the location name 䧢哫膝‘Los Angeles’. The benefits of joint modeling has been noticed by many. For example, Li et al. (2010) reported that a joint syntactic and semantic model improved the accuracy of both tasks, while Ng and Low (2004) showed it’s beneficial to integrate word segmentation and part-of-speech tagging into one model. The later result is confirmed by many others (Zhang and Clark, 2008; Jiang et al., 2008; Kruengkrai et al., 2009). Goldberg and Tsarfaty (2008) showed that a single model for morphological segmentation and syntactic parsing of Hebrew yielded an error reduction of 12% over the best pipelined models. This is because an integrated approach can effectively take into account more information from different levels of analysis. Parsing of Chinese word structures can be reduced to the usual constituent parsing, for which there has been great progress in the past several years. Our generative model for unified word and phrase structure parsing is a direct adaptation of the model presented by Collins (2003). Many other approaches of constituent parsing also use this kind of head-driven generative models (Charniak, 1997; Bikel and Chiang, 2000) . 3 The New Paradigm Given a raw Chinese sentence like 䤕撓䏓喴敯 䋳㢧喓, a traditional word segmentation system would output some result like 䤕撓䏓喴敯䋳㢧 喓(‘Lin Zhihao’, ‘is’, ‘chief engineer’). 
In our new paradigm, the output should at least be a linear sequence of trees representing the structures of each word as in Figure 7. NR QQ   NFf 䤕 NGf 撓䏓 VV VVf 喴 NNHHH    JJ JJf 敯 NN ZZ   NNf 䋳㢧 NNf 喓 Figure 7: Proposed output for the new Chinese word segmentation paradigm. Note that in the proposed output, all words are annotated with their part-of-speech tags. This is necessary since part-of-speech plays an important role in the generation of compound words. For example, 揶‘person’ usually combines with a verb to form a compound noun such as 唗䕏揶‘designer’. In this paper, we will actually design an integrated morphological and syntactical parser trained with a treebank. Therefore, the real output of our system looks like Figure 8. It’s clear that besides all SPPPP P      NP NR ZZ   NFf 䤕 NGf 撓䏓 VPaaa ! ! ! VV VVf 喴 NNHHH    JJ JJf 敯 NN ZZ   NNf 䋳㢧 NNf 喓 Figure 8: The actual output of our parser trained with a fully annotated treebank. the information of the proposed output for the new 1408 paradigm, our model’s output also includes higherlevel syntactic parsing results. 3.1 Training Data We employ a statistical model to parse phrase and word structures as illustrated in Figure 8. The currently available treebank for us is the Penn Chinese Treebank (CTB) 5.0 (Xue et al., 2005). Because our model belongs to the family of head-driven statistical parsing models (Collins, 2003), we use the headfinding rules described by Sun and Jurafsky (2004). Unfortunately, this treebank or any other treebanks for that matter, does not contain annotations of word structures. Therefore, we must annotate these structures by ourselves. The good news is that the annotation is not too complicated. First, we extract all words in the treebank and check each of them manually. Words with non-trivial structures are thus annotated. Finally, we install these small trees of words into the original treebank. Whether a word has structures or not is mostly context independent, so we only have to annotate each word once. There are two noteworthy issues in this process. Firstly, as we’ll see in Section 4, flat words and non-flat words will be modeled differently, thus it’s important to adapt the part-of-speech tags to facilitate this modeling strategy. For example, the tag for nouns is NN as in 憞䠮䞎‘Iraq’ and 卣敯埚‘former president’. After annotation, the former is flat, but the later has a structure (Figure 9). So we change the POS tag for flat nouns to NNf, then during bottom up parsing, whenever a new constituent ending with ‘f’ is found, we can assign it a probability in a way different from a structured word or phrase. Secondly, we should record the head position of each word tree in accordance with the requirements of head driven parsing models. As an example, the right tree in Figure 9 has the context free rule “NN →JJf NNf”, the head of which should be the rightmost NNf. Therefore, in 卣敯埚‘former president’ the head is 敯埚‘president’. In passing, the readers should note the fact that in Figure 9, we have to add a parent labeled NN to the flat word 憞䠮䞎‘Iraq’ so as not to change the context-free rules contained inherently in the original treebank. (a) NN NNf 憞䠮䞎 (b) NN ll , , JJf 卣 NNf 敯埚 Figure 9: Example word structure annotation. We add an ‘f’ to the POS tags of words with no further structures. 
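As a concrete illustration of this annotation step, the following sketch splices hand-annotated word structures into a treebank tree and wraps flat words under an 'f'-suffixed preterminal. It uses an NLTK-style Tree purely for convenience (the paper does not specify a toolkit); the word-structure dictionary and the example sentence are invented, the Chinese examples are rendered here in standard characters, and the recording of head positions is omitted.

```python
from nltk import Tree

# Hand-annotated structures for words with internal structure; tags ending in
# 'f' mark flat components.  This dictionary stands in for the manual annotation.
WORD_STRUCTURES = {
    "副总统": Tree("NN", [Tree("JJf", ["副"]), Tree("NNf", ["总统"])]),   # 'vice president'
}

def install_word_structures(tree):
    """Rewrite the preterminals of a treebank tree: words with annotated
    internal structure are replaced by their small trees, and flat words keep
    their original POS tag as a parent of a new 'f'-suffixed preterminal, so
    the context-free rules of the original treebank are unchanged."""
    for i, child in enumerate(tree):
        if isinstance(child, Tree) and len(child) == 1 and isinstance(child[0], str):
            pos, word = child.label(), child[0]
            if word in WORD_STRUCTURES:
                tree[i] = WORD_STRUCTURES[word].copy(deep=True)
            else:
                tree[i] = Tree(pos, [Tree(pos + "f", [word])])
        elif isinstance(child, Tree):
            install_word_structures(child)
    return tree

sent = Tree("NP", [Tree("NN", ["副总统"]), Tree("NR", ["伊拉克"])])   # 'Iraq'
print(install_word_structures(sent))
# (NP (NN (JJf 副) (NNf 总统)) (NR (NRf 伊拉克)))
```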
4 The Model Given an observed raw sentences S, our generative model tells a story about how this surface sequence of Chinese characters is generated with a linguistically plausible morphological and syntactical process, thereby defining a joint probability Pr(T, S) where T is a parse tree carrying word structures as well as phrase structures. With this model, the parsing problem is to search for the tree T ∗such that T ∗= arg max T Pr(T, S) (1) The generation of S is defined in a top down fashion, which can be roughly summarized as follows. First, the lexicalized constituent structures are generated, then the lexicalized structure of each word is generated. Finally, flat words with no structures are generated. As soon as this is done, we get a tree whose leaves are Chinese characters and can be concatenated to get the surface character sequence S. 4.1 Generation of Constituent Structures Each node in the constituent tree corresponds to a lexicalized context free rule P →Ln Ln−1 · · · L1HR1 R2 · · · Rm (2) where P, Li, Ri and H are lexicalized nonterminals and H is the head. To generate this constituent, first P is generated, then the head child H is generated conditioned on P, and finally each Li and Rj are generated conditioned on P and H and a distance metric. This breakdown of lexicalized PCFG rules is essentially the Model 2 defined by Collins (1999). We refer the readers to Collins’ thesis for further details. 1409 4.2 Generation of Words with Internal Structures Words with rich internal structures can be described using a context-free grammar formalism as word → root (3) word → word suffix (4) word → prefix word (5) Here the root is any word without interesting internal structures, and the prefixes and suffixes are not limited to single characters. For example, 擌懂‘ism’ as in 她㦓擌懂‘modernism’ is a well known and very productive suffix. Also, we can see that rules (4) and (5) are recursive and hence can handle words with very complex structures. By (3)–(5), the generation of word structures is exactly the same as that of ordinary phrase structures. Hence the probabilities of these words can be defined in the same way as higher level constituents in (2). Note that in our case, each word with structures is naturally lexicalized, since in the annotation process we have been careful to record the head position of each complex word. As an example, consider a word w = R(r) S(s) where R is the root part-of-speech headed by the word r, and S is the suffix part-of-speech headed by s. If the head of this word is its suffix, then we can define the probability of w by Pr(w) = Pr(S, s) · Pr(R, r|S, s) (6) This is equivalent to saying that to generate w, we first generate its head S(s), then conditioned on this head, other components of this word are generated. In actual parsing, because a word always occurs in some contexts, the above probability should also be conditioned on these contexts, such as its parent and the parent’s head word. 4.3 Generation of Flat Words We say a word is flat if it contains only one morpheme such as 憞䠮䞎‘Iraq’, or if it is a compound like 䝭䅵‘develop’ which does not have a productive component we are currently interested in. Depending on whether a flat word is known or not, their generative probabilities are computed also differently. Generation of flat words seen in training is trivial and deterministic since every phrase and word structure rules are lexicalized. However, the generation of unknown flat words is a different story. 
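Before coming to unknown flat words, here is a minimal sketch of how the suffix-headed word probability in (6) could be estimated by relative frequency from a handful of annotated word structures. The toy counts and the rendering of the Chinese examples are illustrative only; the full model additionally conditions these terms on the enclosing constituent and its head word, and smooths them.

```python
from collections import Counter

# Toy suffix-headed words as ((root_tag, root_word), (suffix_tag, suffix_word));
# the characters are illustrative renderings, not taken from the treebank.
WORDS = [
    (("VVf", "设计"), ("NNf", "者")),   # 'designer'   = design + -er
    (("VVf", "翻译"), ("NNf", "者")),   # 'translator' = translate + -er
    (("NNf", "化学"), ("NNf", "家")),   # 'chemist'    = chemistry + -ist
]

head_counts = Counter(suffix for _, suffix in WORDS)
pair_counts = Counter(WORDS)
total = sum(head_counts.values())

def word_probability(root, suffix):
    """Relative-frequency version of Pr(w) = Pr(S, s) * Pr(R, r | S, s)
    for a suffix-headed word w = R(r) S(s)."""
    if head_counts[suffix] == 0:
        return 0.0                      # the real model would smooth instead
    p_head = head_counts[suffix] / total
    p_root_given_head = pair_counts[(root, suffix)] / head_counts[suffix]
    return p_head * p_root_given_head

print(word_probability(("VVf", "设计"), ("NNf", "者")))   # 2/3 * 1/2 = 1/3
```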
During training, words that occur less than 6 times are substituted with the symbol UNKNOWN. In testing, unknown words are generated after the generation of symbol UNKNOWN, and we define their probability by a first-order Markov model. That is, given a flat word w = c1c2 · · · cn not seen in training, we define its probability conditioned with the part-of-speech p as Pr(w|p) = n+1 Y i=1 Pr(ci|ci−1, p) (7) where c0 is taken to be a START symbol indicating the left boundary of a word and cn+1 is the STOP symbol to indicate the right boundary. Note that the generation of w is only conditioned on its part-ofspeech p, ignoring the larger constituent or word in which w occurs. We use a back-off strategy to smooth the probabilities in (7): ˜Pr(ci|ci−1, p) = λ1 · ˆPr(ci|ci−1, p) + λ2 · ˆPr(ci|ci−1) +λ3 · ˆPr(ci) (8) where λ1 + λ2 + λ3 = 1 to ensure the conditional probability is well formed. These λs will be estimated with held-out data. The probabilities on the right side of (8) can be estimated with simple counts: ˆPr(ci|ci−1, p) = COUNT(ci−1ci, p) COUNT(ci−1, p) (9) The other probabilities can be estimated in the same way. 4.4 Summary of the Generative Story We make a brief summary of our generative story for the integrated morphological and syntactic parsing model. For a sentence S and its parse tree T, if we denote the set of lexicalized phrase structures in T by C, the set of lexicalized word structures by W, and the set of unknown flat words by F, then the joint probability Pr(T, S) according to our model is Pr(T, S) = Y c∈C Pr(c) Y w∈W Pr(w) Y f∈F Pr(f) (10) 1410 In practice, the logarithm of this probability can be calculated instead to avoid numerical difficulties. 5 The Parsing Algorithm To find the parse tree with highest probability we use a chart parser adapted from Collins (1999). Two key changes must be made to the search process, though. Firstly, because we are proposing a new paradigm for Chinese word segmentation, the input to the parser must be raw sentences by definition. Hence to use the bottom-up parser, we need a lexicon of all characters together with what roles they can play in a flat word. We can get this lexicon from the treebank. For example, from the word 撥愊/NNf ‘center’, we can extract a role bNNf for character 撥 ‘middle’ and a role eNNf for character 愊‘center’. The role bNNf means the beginning of the flat label NNf, while eNNf stands for the end of the label NNf. This scheme was first proposed by Luo (2003) in his character-based Chinese parser, and we find it quite adequate for our purpose here. Secondly, in the bottom-up parser for head driven models, whenever a new edge is found, we must assign it a probability and a head word. If the newly discovered constituent is a flat word (its label ends with ‘f’), then we set its head word to be the concatenation of all its child characters, i.e. the word itself. If it is an unknown word, we use (7) to assign the probability, otherwise its probability is set to be 1. On the other hand, if the new edge is a phrase or word with internal structures, the probability is set according to (2), while the head word is found with the appropriate head rules. In this bottom-up way, the probability for a complete parse tree is known as soon as it is completed. This probability includes both word generation probabilities and constituent probabilities. 6 Evaluation For several reasons, it is a little tricky to evaluate the accuracy of our model for integrated morphological and syntactic parsing. 
First and foremost, we currently know of no other same effort in parsing the structures of Chinese words, and we have to annotate word structures by ourselves. Hence there is no baseline performance to compare with. Secondly, simply reporting the accuracy of labeled precision and recall is not very informative because our parser takes raw sentences as input, and its output includes a lot of easy cases like word segmentation and partof-speech tagging results. Despite these difficulties, we note that higherlevel constituent parsing results are still somewhat comparable with previous performance in parsing Penn Chinese Treebank, because constituent parsing does not involve word structures directly. Having said that, it must be pointed out that the comparison is meaningful only in a limited sense, as in previous literatures on Chinese parsing, the input is always word segmented or even part-of-speech tagged. That is, the bracketing in our case is around characters instead of words. Another observation is we can still evaluate Chinese word segmentation and partof-speech tagging accuracy, by reading off the corresponding result from parse trees. Again because we split the words with internal structures into their components, comparison with other systems should be viewed with that in mind. Based on these discussions, we divide the labels of all constituents into three categories: Phrase labels are the labels in Peen Chinese Treebank for nonterminal phrase structures, including NP, VP, PP, etc. POS labels represent part-of-speech tags such as NN, VV, DEG, etc. Flat labels are generated in our annotation for words with no interesting structures. Recall that they always end with an ‘f’ such as NNf, VVf and DEGf, etc. With this classification, we report our parser’s accuracy for phrase labels, which is approximately the accuracy of constituent parsing of Penn Chinese Treebank. We report our parser’s word segmentation accuracy based on the flat labels. This accuracy is in fact the joint accuracy of segmentation and part-of-speech tagging. Most importantly, we can report our parser’s accuracy in recovering word structures based on POS labels and flat labels, since word structures may contain only these two kinds of labels. With the standard split of CTB 5.0 data into training, development and test sets (Zhang and Clark, 1411 2009), the result are summarized in Table 1. For all label categories, the PARSEEVAL measures (Abney et al., 1991) are used in computing the labeled precision and recall. Types LP LR F1 Phrase 79.3 80.1 79.7 Flat 93.2 93.8 93.5 Flat* 97.1 97.6 97.3 POS & Flat 92.7 93.2 92.9 Table 1: Labeled precision and recall for the three types of labels. The line labeled ‘Flat*’ is for unlabeled metrics of flat words, which is effectively the ordinary word segmentation accuracy. Though not directly comparable, we can make some remarks to the accuracy of our model. For constituent parsing, the best result on CTB 5.0 is reported to be 78% F1 measure for unlimited sentences with automatically assigned POS tags (Zhang and Clark, 2009). Our result for phrase labels is close to this accuracy. Besides, the result for flat labels compares favorably with the state of the art accuracy of about 93% F1 for joint word segmentation and part-of-speech tagging (Jiang et al., 2008; Kruengkrai et al., 2009). 
For ordinary word segmentation, the best result is reported to be around 97% F1 on CTB 5.0 (Kruengkrai et al., 2009), while our parser performs at 97.3%, though we should remember that the result concerns flat words only. Finally, we see the performance of word structure recovery is almost as good as the recognition of flat words. This means that parsing word structures accurately is possible with a generative model. It is interesting to see how well the parser does in recognizing the structure of words that were not seen during training. For this, we sampled 100 such words including those with prefixes or suffixes and personal names. We found that for 82 of these words, our parser can correctly recognize their structures. This means our model has learnt something that generalizes well to unseen words. In error analysis, we found that the parser tends to over generalize for prefix and suffix characters. For example, 㦌斊䕛‘great writer’ is a noun phrase consisting of an adjective 㦌‘great’ and a noun 斊䕛 ‘writer’, as shown in Figure 10(a), but our parser incorrectly analyzed it into a root 㦌斊‘masterpiece’ and a suffix 䕛‘expert’, as in Figure 10(b). This (a) NP ll , , JJ JJf 㦌 NN NNf 斊䕛 (b) NN ZZ   NNf 㦌斊 NNf 䕛 Figure 10: Example of parser error. Tree (a) is correct, and (b) is the wrong result by our parser. is because the character 䕛‘expert’ is a very productive suffix, as in 䑺惆䕛‘chemist’ and 堉䘂䕛 ‘diplomat’. This observation is illuminating because most errors of our parser follow this pattern. Currently we don’t have any non-ad hoc way of preventing such kind of over generalization. 7 Conclusion and Discussion In this paper we proposed a new paradigm for Chinese word segmentation in which not only flat words were identified but words with structures were also parsed. We gave good reasons why this should be done, and we presented an effective method showing how this could be done. With the progress in statistical parsing technology and the development of large scale treebanks, the time has now come for this paradigm shift to happen. We believe such a new paradigm for word segmentation is linguistically justified and pragmatically beneficial to real world applications. We showed that word structures can be recovered with high precision, though there’s still much room for improvement, especially for higher level constituent parsing. Our model is generative, but discriminative models such as maximum entropy technique (Berger et al., 1996) can be used in parsing word structures too. Many parsers using these techniques have been proved to be quite successful (Luo, 2003; Fung et al., 2004; Wang et al., 2006). Another possible direction is to combine generative models with discriminative reranking to enhance the accuracy (Collins and Koo, 2005; Charniak and Johnson, 2005). Finally, we must note that the use of flat labels such as “NNf” is less than ideal. The most impor1412 tant reason these labels are used is we want to compare the performance of our parser with previous results in constituent parsing, part-of-speech tagging and word segmentation, as we did in Section 6. The problem with this approach is that word structures and phrase structures are then not treated in a truly unified way, and besides the 33 part-of-speech tags originally contained in Penn Chinese Treebank, another 33 tags ending with ‘f’ are introduced. We leave this problem open for now and plan to address it in future work. 
Acknowledgments I would like to thank Professor Maosong Sun for many helpful discussions on topics of Chinese morphological and syntactic analysis. The author is supported by NSFC under Grant No. 60873174. Heartfelt thanks also go to the reviewers for many pertinent comments which have greatly improved the presentation of this paper. References S. Abney, S. Flickenger, C. Gdaniec, C. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. Procedure for quantitatively comparing the syntactic coverage of English grammars. In E. Black, editor, Proceedings of the workshop on Speech and Natural Language, HLT ’91, pages 306–311, Morristown, NJ, USA. Association for Computational Linguistics. Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. Daniel M. Bikel and David Chiang. 2000. Two statistical parsing models applied to the Chinese treebank. In Second Chinese Language Processing Workshop, pages 1–6, Hong Kong, China, October. Association for Computational Linguistics. Eugene Charniak and Mark Johnson. 2005. Coarse-tofine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, ACL ’05, pages 173–180, Morristown, NJ, USA. Association for Computational Linguistics. Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence, AAAI’97/IAAI’97, pages 598–603. AAAI Press. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31:25–70, March. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589–637. Pascale Fung, Grace Ngai, Yongsheng Yang, and Benfeng Chen. 2004. A maximum-entropy Chinese parser augmented by transformation-based learning. ACM Transactions on Asian Language Information Processing, 3:159–168, June. Jianfeng Gao, Andi Wu, Cheng-Ning Huang, Hong qiao Li, Xinsong Xia, and Hauwei Qin. 2004. Adaptive Chinese word segmentation. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL’04), Main Volume, pages 462–469, Barcelona, Spain, July. Jianfeng Gao, Mu Li, Andi Wu, and Chang-Ning Huang. 2005. Chinese word segmentation and named entity recognition: A pragmatic approach. Computational Linguistics, 31(4):531–574. Yoav Goldberg and Reut Tsarfaty. 2008. A single generative model for joint morphological segmentation and syntactic parsing. In Proceedings of ACL-08: HLT, pages 371–379, Columbus, Ohio, June. Association for Computational Linguistics. Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan L¨u. 2008. A cascaded linear model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of ACL-08: HLT, pages 897–904, Columbus, Ohio, June. Association for Computational Linguistics. Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Automatic adaptation of annotation standards: Chinese word segmentation and POS tagging – a case study. 
In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 522–530, Suntec, Singapore, August. Association for Computational Linguistics. Canasai Kruengkrai, Kiyotaka Uchimoto, Jun’ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. An error-driven word-character hybrid model for joint Chinese word segmentation and POS tagging. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Process1413 ing of the AFNLP, pages 513–521, Suntec, Singapore, August. Association for Computational Linguistics. Zhongguo Li and Maosong Sun. 2009. Punctuation as implicit annotations for Chinese word segmentation. Computational Linguistics, 35:505–512, December. Junhui Li, Guodong Zhou, and Hwee Tou Ng. 2010. Joint syntactic and semantic parsing of Chinese. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1108– 1117, Uppsala, Sweden, July. Association for Computational Linguistics. Xiaoqiang Luo. 2003. A maximum entropy Chinese character-based parser. In Michael Collins and Mark Steedman, editors, Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, pages 192–199. Hwee Tou Ng and Jin Kiat Low. 2004. Chinese part-ofspeech tagging: One-at-a-time or all-at-once? wordbased or character-based? In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 277– 284, Barcelona, Spain, July. Association for Computational Linguistics. Richard Sproat and Thomas Emerson. 2003. The first international Chinese word segmentation bakeoff. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, pages 133–143, Sapporo, Japan, July. Association for Computational Linguistics. Richard Sproat, William Gale, Chilin Shih, and Nancy Chang. 1996. A stochastic finite-state wordsegmentation algorithm for Chinese. Computational Linguistics, 22(3):377–404. Honglin Sun and Daniel Jurafsky. 2004. Shallow semantc parsing of Chinese. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 249–256, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Mengqiu Wang, Kenji Sagae, and Teruko Mitamura. 2006. A fast, accurate deterministic parser for chinese. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 425–432, Sydney, Australia, July. Association for Computational Linguistics. Andi Wu and Jiang Zixin. 1998. Word segmentation in sentence analysis. In Proceedings of the 1998 International Conference on Chinese information processing, Beijing, China. Andi Wu. 2003. Customizable segmentation of morphologically derived words in Chinese. Computational Linguistics and Chinese language processing, 8(1):1– 28. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238. Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29–48. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 840–847, Prague, Czech Republic, June. 
Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2008. Joint word segmentation and POS tagging using a single perceptron. In Proceedings of ACL-08: HLT, pages 888–896, Columbus, Ohio, June. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2009. Transition-based parsing of the Chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies, IWPT ’09, pages 162–171, Morristown, NJ, USA. Association for Computational Linguistics. Hai Zhao. 2009. Character-level dependencies in Chinese: Usefulness and learning. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 879–887, Athens, Greece, March. Association for Computational Linguistics. Guodong Zhou and Jian Su. 2003. A Chinese efficient analyser integrating word segmentation, part-ofspeech tagging, partial parsing and full parsing. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing, pages 78–83, Sapporo, Japan, July. Association for Computational Linguistics. 1414
2011
141
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1415–1424, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Simple Measure to Assess Non-response Anselmo Pe˜nas and Alvaro Rodrigo UNED NLP & IR Group Juan del Rosal, 16 28040 Madrid, Spain {anselmo,[email protected]} Abstract There are several tasks where is preferable not responding than responding incorrectly. This idea is not new, but despite several previous attempts there isn’t a commonly accepted measure to assess non-response. We study here an extension of accuracy measure with this feature and a very easy to understand interpretation. The measure proposed (c@1) has a good balance of discrimination power, stability and sensitivity properties. We show also how this measure is able to reward systems that maintain the same number of correct answers and at the same time decrease the number of incorrect ones, by leaving some questions unanswered. This measure is well suited for tasks such as Reading Comprehension tests, where multiple choices per question are given, but only one is correct. 1 Introduction There is some tendency to consider that an incorrect result is simply the absence of a correct one. This is particularly true in the evaluation of Information Retrieval systems where, in fact, the absence of results sometimes is the worse output. However, there are scenarios where we should consider the possibility of not responding, because this behavior has more value than responding incorrectly. For example, during the process of introducing new features in a search engine it is important to preserve users’ confidence in the system. Thus, a system must decide whether it should give or not a result in the new fashion or keep on with the old kind of output. A similar example is the decision about showing or not ads related to the query. Showing wrong ads harms the business model more than showing nothing. A third example more related to Natural Language Processing is the Machine Reading evaluation through reading comprehension tests. In this case, where multiple choices for a question are offered, choosing a wrong option should be punished against leaving the question unanswered. In the latter case, the use of utility functions is a very common option. However, utility functions give arbitrary value to not responding and ignore the system’s behavior showed when it responds (see Section 2). To avoid this, we present c@1 measure (Section 2.2), as an extension of accuracy (the proportion of correctly answered questions). In Section 3 we show that no other extension produces a sensible measure. In Section 4 we evaluate c@1 in terms of stability, discrimination power and sensibility, and some real examples of its behavior are given in the context of Question Answering. Related work is discussed in Section 5. 2 Looking for the Value of Not Responding Lets take the scenario of Reading Comprehension tests to argue about the development of the measure. Our scenario assumes the following: • There are several questions. • Each question has several options. • One option is correct (and only one). The first step is to consider the possibility of not responding. If the system responds, then the assessment will be one of two: correct or wrong. But if 1415 the system doesn’t respond there is no assessment. Since every question has a correct answer, non response is not correct but it is not incorrect either. 
This is represented in contingency Table 1, where: • nac: number of questions for which the answer is correct • naw: number of questions for which the answer is incorrect • nu: number of questions not answered • n: number of questions (n = nac + naw + nu) Correct (C) Incorrect (¬C) Answered (A) nac naw Unanswered (¬A) nu Table 1: Contingency table for our scenario Let’s start studying a simple utility function able to establish the preference order we want: • -1 if question receives an incorrect response • 0 if question is left unanswered • 1 if question receives a correct response Let U(i) be the utility function that returns one of the above values for a given question i. Thus, if we want to consider n questions in the evaluation, the measure would be: UF = 1 n n ∑ i=1 U(i) = nac −naw n (1) The rationale of this utility function is intuitive: not answering adds no value and wrong answers add negative values. Positive values of UF indicate more correct answers than incorrect ones, while negative values indicate the opposite. However, the utility function is giving an arbitrary value to the preferences (-1, 0, 1). Now we want to interpret in some way the value that Formula (1) assigns to unanswered questions. For this purpose, we need to transform Formula (1) into a more meaningful measure with a parameter for the number of unanswered questions (nu). A monotonic transformation of (1) permit us to preserve the ranking produced by the measure. Let f(x)=0.5x+0.5 be the monotonic function to be used for the transformation. Applying this function to Formula (1) results in Formula (2): 0.5nac −naw n + 0.5 = 0.5 n [nac −naw + n] = = 0.5 n [nac −naw + nac + naw + nu] = 0.5 n [2nac + nu] = nac n + 0.5nu n (2) Measure (2) provides the same ranking of systems than measure (1). The first summand of Formula (2) corresponds to accuracy, while the second is adding an arbitrary constant weight of 0.5 to the proportion of unanswered questions. In other words, unanswered questions are receiving the same value as if half of them had been answered correctly. This does not seem correct given that not answering is being rewarded in the same proportion to all the systems, without taking into account the performance they have shown with the answered questions. We need to propose a more sensible estimation for the weight of unanswered questions. 2.1 A rationale for the Value of Unanswered Questions According to the utility function suggested, unanswered questions would have value as if half of them had been answered correctly. Why half and not other value? Even more, Why a constant value? Let’s generalize this idea and estate more clearly our hypothesis: Unanswered questions have the same value as if a proportion of them would have been answered correctly. We can express this idea according to contingency Table 1 in the following way: P(C) = P(C ∩A) + P(C ∩¬A) = = P(C ∩A) + P(C/¬A) ∗P(¬A) (3) P(C ∩A) can be estimated by nac/n, P(¬A) can be estimated by nu/n, and we have to estimate P(C/¬A). Our hypothesis is saying that P(C/¬A) 1416 is different from 0. The utility measure (2) corresponds to P(C) in Formula (3) where P(C/¬A) receives a constant value of 0.5. It is assuming arbitrarily that P(C/¬A) = P(C/A). Following this, our measure must consist of two parts: The overall accuracy and a better estimation of correctness over the unanswered questions. 2.2 The Measure Proposed: c@1 From the answered questions we have already observed the proportion of questions that received a correct answer (P(C ∩A) = nac/n). 
We can use this observation as our estimation for P(C/¬A) instead of the arbitrary value of 0.5. Thus, the measure we propose is c@1 (correctness at one) and is formally represented as follows: c@1 = nac n + nac n nu n = 1 n(nac + nac n nu) (4) The most important features of c@1 are: 1. A system that answers all the questions will receive a score equal to the traditional accuracy measure: nu=0 and therefore c@1=nac/n. 2. Unanswered questions will add value to c@1 as if they were answered with the accuracy already shown. 3. A system that does not return any answer would receive a score equal to 0 due to nac=0 in both summands. According to the reasoning above, we can interpret c@1 in terms of probability as P(C) where P(C/¬A) has been estimated with P(C ∩A). In the following section we will show that there is no other estimation for P(C/¬A) able to provide a reasonable evaluation measure. 3 Other Estimations for P(C/¬A) In this section we study whether other estimations of P(C/¬A) can provide a sensible measure for QA when unanswered questions are taken into account. They are: 1. P(C/¬A) ≡0 2. P(C/¬A) ≡1 3. P(C/¬A) ≡P(¬C/¬A) ≡0.5 4. P(C/¬A) ≡P(C/A) 5. P(C/¬A) ≡P(¬C/A) 3.1 P(C/¬A) ≡0 This estimation considers the absence of response as incorrect response and we have the traditional accuracy (nac/n). Obviously, this is against our purposes. 3.2 P(C/¬A) ≡1 This estimation considers all unanswered questions as correctly answered. This option is not reasonable and is given for completeness: systems giving no answer would get maximum score. 3.3 P(C/¬A) ≡P(¬C/¬A) ≡0.5 It could be argued that since we cannot have observations of correctness for unanswered questions, we should assume equiprobability between P(C/¬A) and P(¬C/¬A). In this case, P(C) corresponds to the expression (2) already discussed. As previously explained, in this case we are giving an arbitrary constant value to unanswered questions independently of the system’s performance shown with answered ones. This seems unfair. We should be aiming at rewarding those systems not responding instead of giving wrong answers, not reward the sole fact that the system is not responding. 3.4 P(C/¬A) ≡P(C/A) An alternative is to estimate the probability of correctness for the unanswered questions as the precision observed over the answered ones: P(C/A)= nac/(nac+ naw). In this case, our measure would be like the one shown in Formula (5): P(C) = P(C ∩A) + P(C/¬A) ∗P(¬A) = = P(C/A) ∗P(A) + P(C/A) ∗P(¬A) = = P(C/A) = nac nac + naw (5) The resulting measure is again the observed precision over the answered ones. This is not a sensible measure, as it would reward a cheating system that decides to leave all questions unanswered except one for which it is sure to have a correct answer. 1417 Furthermore, from the idea that P(C/¬A) is equal to P(C/A) the underlying assumption is that systems choose to answer or not to answer randomly, whereas we want to reward the systems that choose not responding because they are able to decide that their candidate options are wrong or because they are unable to decide which candidate is correct. 3.5 P(C/¬A) ≡P(¬C/A) The last option to be considered explores the idea that systems fail not responding in the same proportion that they fail when they give an answer (i.e. proportion of incorrect answers). Estimating P(C/¬A) as naw / (nac+ naw), the measure would be: P(C) = P(C ∩A) + P(C/¬A) ∗P(¬A) = = P(C ∩A) ∗P(¬C/A) ∗P(¬A) = = nac n + naw nac + naw ∗nu n (6) This measure is very easy to cheat. 
It is possible to obtain almost a perfect score just by answering incorrectly only one question and leaving unanswered the rest of the questions. 4 Evaluation of c@1 When a new measure is proposed, it is important to study the reliability of the results obtained using that measure. For this purpose, we have chosen the method described by Buckley and Voorhees (2000) for assessing the stability and discrimination power, as well as the method described by Voorhees and Buckley (2002) for examining the sensitivity of our measure. These methods have been used for studying IR metrics (showing similar results with the methods based on statistics (Sakai, 2006)), as well as for evaluating the reliability of other QA measures different to the ones studied here (Sakai, 2007a; Voorhees, 2002; Voorhees, 2003). We have compared the results over c@1 with the ones obtained using both accuracy and the utility function (UF) defined in Formula (1). This comparison is useful to show how confident can a researcher be with the results obtained using each evaluation measure. In the following subsections we will first show the data used for our study. Then, the experiments about stability and sensitivity will be described. 4.1 Data sets We used the test collections and runs from the Question Answering track at the Cross Language Evaluation Forum 2009 (CLEF) (Pe˜nas et al., 2010). The collection has a set of 500 questions with their answers. The 44 runs in different languages contain the human assessments for the answers given by actual participants. Systems could chose not to answer a question. In this case, they had the chance to submit their best candidate in order to assess the performance of their validation module (the one that decides whether to give or not the answer). This data collection allows us to compare c@1 and accuracy over the same runs. 4.2 Stability vs. Discrimination Power The more stable a measure is, the lower the probability of errors associated with the conclusion “system A is better than system B” is. Measures with a high error must be used more carefully performing more experiments than in the case of using a measure with lower error. In order to study the stability of c@1 and to compare it with accuracy we used the method described by Buckley and Voorhees (2000). This method allows also to study the number of times systems are deemed to be equivalent with respect to a certain measure, which reflects the discrimination power of that measure. The less discriminative the measure is, the more ties between systems there will be. This means that longer difference in scores will be needed for concluding which system is better (Buckley and Voorhees, 2000). The method works as follows: let S denote a set of runs. Let x and y denote a pair of runs from S. Let Q denote the entire evaluation collection. Let f represents the fuzziness value, which is the percent difference between scores such that if the difference is smaller than f then the two scores are deemed to be equivalent. We apply the algorithm of Figure 1 to obtain the information needed for computing the error rate (Formula (7)). Stability is inverse to this value, the lower the error rate is, the more stable the measure is. The same algorithm gives us the 1418 proportion of ties (Formula (8)), which we use for measuring discrimination power, that is the lower the proportion of ties is, the more discriminative the measure is. 
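The procedure just described (given as pseudo-code in Figure 1 below) can be rendered roughly as in the following sketch. It is illustrative only: M(run, subset) is assumed to be a function returning the run's score (accuracy, c@1 or UF) on a question subset, and the two returned ratios correspond to the error rate and proportion of ties of Formulas (7) and (8).

```python
import random

def stability_counts(runs, questions, M, f=0.05, c=250, trials=100):
    """Resampling loop of Figure 1: count ties (EQ) and wins (GT) per pair."""
    EQ, GT = {}, {}
    pairs = [(x, y) for i, x in enumerate(runs) for y in runs[i + 1:]]
    for x, y in pairs:
        for _ in range(trials):
            Qi = random.sample(questions, c)      # random subset of size c
            mx, my = M(x, Qi), M(y, Qi)
            margin = f * max(mx, my)              # fuzziness margin
            if abs(mx - my) < margin:
                EQ[(x, y)] = EQ.get((x, y), 0) + 1
            elif mx > my:
                GT[(x, y)] = GT.get((x, y), 0) + 1
            else:
                GT[(y, x)] = GT.get((y, x), 0) + 1
    return EQ, GT, pairs

def error_and_ties(EQ, GT, pairs):
    """Error rate (Formula 7) and proportion of ties (Formula 8)."""
    err = sum(min(GT.get((x, y), 0), GT.get((y, x), 0)) for x, y in pairs)
    ties = sum(EQ.get((x, y), 0) for x, y in pairs)
    total = sum(GT.get((x, y), 0) + GT.get((y, x), 0) + EQ.get((x, y), 0)
                for x, y in pairs)
    return err / total, ties / total
```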
for each pair of runs x,y ϵ S for each trial from 1 to 100 Qi = select at random subcol of size c from Q; margin = f * max (M(x,Qi),M(y,Qi)); if(|M(x,Qi) - M(y,Qi)| < |margin|) EQM(x,y)++; else if(|M(x,Qi) > M(y,Qi)|) GTM(x,y)++; else GTM(y,x)++; Figure 1: Algorithm for computing EQM(x,y), GTM(x,y) and GTM(y,x) in the stability method We assume that for each measure the correct decision about whether run x is better than run y happens when there are more cases where the value of x is better than the value of y. Then, the number of times y is better than x is considered as the number of times the test is misleading, while the number of times the values of x and y are equivalent is considered the number of ties. On the other hand, it is clear that larger fuzziness values decrease the error rate but also decrease the discrimination power of a measure. Since a fixed fuzziness value might imply different trade-offs for different metrics, we decided to vary the fuzziness value from 0.01 to 0.10 (following the work by Sakai (2007b)) and to draw for each measure a proportionof-ties / error-rate curve. Figure 2 shows these curves for the c@1, accuracy and UF measures. In the Figure we can see how there is a consistent decrease of the error rate of all measures when the proportion of ties increases (this corresponds to the increase in the fuzziness value). Figure 2 shows that the curves of accuracy and c@1 are quite similar (slightly better behavior of c@1) , which means that they have a similar stability and discrimination power. The results suggest that the three measures are quite stable, having c@1 and accuracy a lower error rate than UF when the proportion of ties grows. These curves are similar to the ones obtained for Figure 2: Error-rate / Proportion of ties curves for accuracy, c@1 and UF with c = 250 other QA evaluation measures (Sakai, 2007a). 4.3 Sensitivity The swap-rate (Voorhees and Buckley, 2002) represents the chance of obtaining a discrepancy between two question sets (of the same size) as to whether a system is better than another given a certain difference bin. Looking at the swap-rates of all the difference performance bins, the performance difference required in order to conclude that a run is better than another for a given confidence value can be estimated. For example, if we want to know the required difference for concluding that system A is better than system B with a confidence of 95%, then we select the difference that represents the first bin where the swap-rate is lower or equal than 0.05. The sensitivity of the measure is the number of times among all the comparisons in the experiment where this performance difference is obtained (Sakai, 2007b). That is, the more comparisons accomplish the estimated performance difference, the more sensitive is the measure. The more sensitive the measure, the more useful it is for system discrimination. The swap method works as follows: let S denote a set of runs, let x and y denote a pair of runs from S. Let Q denote the entire evaluation collection. And let d denote a performance difference between two runs. Then, we first define 21 performance difference bins: the first bin represents performance differences between systems such that 0 ≤d < 0.01; the second bin represents differences such that 0.01 ≤d < 0.02; and the limits for the remaining bins increase by increments of 0.01, with the last bin containing all the differences equal or higher than 0.2. 
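For concreteness, the 21-bin mapping just defined can be written as a small helper; this is only a sketch, and the paper's own BIN(d) mapping is introduced below.

```python
def bin_index(d):
    """Map an absolute performance difference d to one of the 21 bins:
    [0, 0.01), [0.01, 0.02), ..., [0.19, 0.20), plus a final bin for d >= 0.2."""
    return min(int(abs(d) / 0.01), 20)

assert bin_index(0.005) == 0 and bin_index(0.013) == 1 and bin_index(0.37) == 20
```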
1419 Error rateM = ∑ x,yϵS min(GTM(x, y), GTM(y, x)) ∑ x,yϵS(GTM(x, y) + GTM(y, x) + EQM(x, y)) (7) Prop TiesM = ∑ x,yϵS EQM(x, y) ∑ x,yϵS(GTM(x, y) + GTM(y, x) + EQM(x, y)) (8) Let BIN(d) denote a mapping from a difference d to one of the 21 bins where it belongs. Thus, algorithm in Figure 3 is applied for calculating the swap-rate of each bin. for each pair of runs x,y ϵ S for each trial from 1 to 100 select Qi , Q ′ i ⊂Q, where Qi ∩Q ′ i == ϕ and |Qi| == |Q ′ i| == c; dM(Qi) = M(x, Qi) −M(y, Qi); dM(Q ′ i) = M(x, Q ′ i) −M(y, Q ′ i); counter(BIN(|dM(Qi)|))++; if(dM(Qi) * dM(Q ′ i) < 0) swap counter(BIN(|dM(Qi)|))++; for each bin b swap rate(b) = swap counter(b)/counter(b); Figure 3: Algorithm for computing swap-rates (i) (ii) (iii) (iv) UF 0.17 0.48 35.12% 59.30% c@1 0.09 0.77 11.69% 58.40% accuracy 0.09 0.68 13.24% 55.00% Table 2: Results obtained applying the swap method to accuracy, c@1 and UF at 95% of confidence, with c = 250: (i) Absolute difference required; (ii) Highest value obtained; (iii) Relative difference required ((i)/(ii)); (iv) percentage of comparisons that accomplish the required difference (sensitivity) Given that Qi and Q ′ i must be disjoint, their size can only be up to half of the size of the original collection. Thus, we use the value c=250 for our experiment1. Table 2 shows the results obtained by applying the swap method to accuracy, c@1 and UF, with c = 250, swap-rate ≤5, and sensitivity given a confidence of 95% (Column (iv)). The range of values 1We use the same size for experiments in Section 4.2 for homogeneity reasons. are similar to the ones obtained for other measures according to (Sakai, 2007a). According to Column (i), a higher absolute difference is required for concluding that a system is better than another using UF. However, the relative difference is similar to the one required by c@1. Thus, similar percentage of comparisons using c@1 and UF accomplish the required difference (Column (iv)). These results show that their sensitivity values are similar, and higher than the value for accuracy. 4.4 Qualitative evaluation In addition to the theoretical study, we undertook a study to interpret the results obtained by real systems in a real scenario. The aim is to compare the results of the proposed c@1 measure with accuracy in order to compare their behavior. For this purpose we inspected the real systems runs in the data set. System c@1 accuracy (i) (ii) (iii) icia091ro 0.58 0.47 237 156 107 uaic092ro 0.47 0.47 236 264 0 loga092de 0.44 0.37 187 230 83 base092de 0.38 0.38 189 311 0 Table 3: Example of system results in QA@CLEF 2009. (i) number of questions correctly answered; (ii) number of questions incorrectly answered; (iii) number of unanswered questions. Table 3 shows a couple of examples where two systems have answered correctly a similar number of questions. For example, this is the case of icia091ro and uaic092ro that, therefore, obtain almost the same accuracy value. However, icia091ro has returned less incorrect answers by not responding some questions. This is the kind of behavior we want to measure and reward. Table 3 shows how accuracy is sensitive only to the number of correct answers whereas c@1 is able to distinguish when 1420 systems keep the number of correct answers but reduce the number of incorrect ones by not responding to some. The same reasoning is applicable to loga092de compared to base092de for German. 
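To make the comparison in Table 3 concrete, the following sketch (for illustration, not part of the original experiments) recomputes accuracy and c@1 from the raw counts in the table using Formula (4); with n = 500 questions per run it reproduces the reported scores, e.g. 0.58 vs. 0.47 for icia091ro.

```python
def accuracy(n_ac, n_aw, n_u):
    n = n_ac + n_aw + n_u
    return n_ac / n

def c_at_1(n_ac, n_aw, n_u):
    """c@1 = (n_ac + (n_ac / n) * n_u) / n  -- Formula (4)."""
    n = n_ac + n_aw + n_u
    return (n_ac + (n_ac / n) * n_u) / n

# (correct, incorrect, unanswered) counts from Table 3, n = 500 for each run
systems = {
    'icia091ro': (237, 156, 107),
    'uaic092ro': (236, 264, 0),
    'loga092de': (187, 230, 83),
    'base092de': (189, 311, 0),
}
for name, counts in systems.items():
    print(name, round(c_at_1(*counts), 2), round(accuracy(*counts), 2))
# icia091ro 0.58 0.47 | uaic092ro 0.47 0.47 | loga092de 0.44 0.37 | base092de 0.38 0.38
```

On these counts the effect described in the text is visible directly: icia091ro keeps roughly the same number of correct answers as uaic092ro but converts many incorrect answers into non-responses, and only c@1 rewards this.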
5 Related Work The decision of leaving a query without response is related to the system ability to measure accurately its self-confidence about the correctness of their candidate answers. Although there have been one attempt to make the self-confidence score explicit and use it (Herrera et al., 2005), rankings are, usually, the implicit way to evaluate this self-confidence. Mean Reciprocal Rank (MRR) has traditionally been used to evaluate Question Answering systems when several answers per question were allowed and given in order (Fukumoto et al., 2002; Voorhees and Tice, 1999). However, as it occurs with Accuracy (proportion of questions correctly answered), the risk of giving a wrong answer is always preferred better than not responding. The QA track at TREC 2001 was the first evaluation campaign in which systems were allowed to leave a question unanswered (Voorhees, 2001). The main evaluation measure was MRR, but performance was also measured by means of the percentage of answered questions and the portion of them that were correctly answered. However, no combination of these two values into a unique measure was proposed. TREC 2002 discarded the idea of including unanswered questions in the evaluation. Only one answer by question was allowed and all answers had to be ranked according to the system’s self-confidence in the correctness of the answer. Systems were evaluated by means of Confidence Weighted Score (CWS), rewarding those systems able to provide more correct answers at the top of the ranking (Voorhees, 2002). The formulation of CWS is the following: CWS = 1 n n ∑ i=1 C(i) i (9) Where n is the number of questions, and C(i) is the number of correct answers up to the position i in the ranking. Formally: C(i) = i ∑ j=1 I(j) (10) where I(j) is a function that returns 1 if answer j is correct and 0 if it is not. The formulation of CWS is inspired by the Average Precision (AP) over the ranking for one question: AP = 1 R ∑ r I(r)C(r) r (11) where R is the number of known relevant results for a topic, and r is a position in the ranking. Since only one answer per question is requested, R equals to n (the number of questions) in CWS. However, in AP formula the summands belong to the positions of the ranking where there is a relevant result (product of I(r)), whereas in CWS every position of the ranking add value to the measure regardless of whether there is a relevant result or not in that position. Therefore, CWS gives much more value to some questions over others: questions whose answers are at the top of the ranking are giving almost the complete value to CWS, whereas those questions whose answers are at the bottom of the ranking are almost not counting in the evaluation. Although CWS was aimed at promoting the development of better self-confidence scores, it was discussed as a measure for evaluating QA systems performance. CWS was discarded in the following campaigns of TREC in favor of accuracy (Voorhees, 2003). Subsequently, accuracy was adopted by the QA track at the Cross-Language Evaluation Forum from the beginning (Magnini et al., 2005). There was an attempt to consider explicitly systems confidence self-score (Herrera et al., 2005): the use of the Pearson’s correlation coefficient and the proposal of measures K and K1 (see Formula 12). These measures are based in a utility function that returns -1 if the answer is incorrect and 1 if it is correct. This positive or negative value is weighted with the normalized confidence self-score given by the system to each answer. 
K is a variation of K1 for being used in evaluations where more than an answer per question is allowed. If the self-score is 0, then the answer is ignored and thus, this measure is permitting to leave a question unanswered. A system that always returns a 1421 K1 = ∑ iϵ{correctanswers} self score(i) − ∑ iϵ{incorrectanswers} self score(i) n ϵ [−1, 1] (12) self-score equals to 0 (no answer) obtains a K1 value of 0. However, the final value of K1 is difficult to interpret: a positive value does not indicate necessarily more correct answers than incorrect ones, but that the sum of scores of correct answers is higher than the sum resulting from the scores of incorrect answers. This could explain the little success of this measure for evaluating QA systems in favor, again, of accuracy measure. Accuracy is the simplest and most intuitive evaluation measure. At the same time is able to reward those systems showing good performance. However, together with MRR belongs to the set of measures that pushes in favor of giving always a response, even wrong, since there is no punishment for it. Thus, the development of better validation technologies (systems able to decide whether the candidate answers are correct or not) is not promoted, despite new QA architectures require them. In effect, most QA systems during TREC and CLEF campaigns had an upper bound of accuracy around 60%. An explanation for this was the effect of error propagation in the most extended pipeline architecture: Passage Retrieval, Answer Extraction, Answer Ranking. Even with performances higher than 80% in each step, the overall performance drops dramatically just because of the product of partial performances. Thus, a way to break the pipeline architecture is the development of a module able to decide whether the QA system must continue or not its searching for new candidate answers: the Answer Validation module. This idea is behind the architecture of IBM’s Watson (DeepQA project) that successfully participated at Jeopardy (Ferrucci et al., 2010). In 2006, the first Answer Validation Exercise (AVE) proposed an evaluation task to advance the state of the art in Answer Validation technologies (Pe˜nas et al., 2007). The starting point was the reformulation of Answer Validation as a Recognizing Textual Entailment problem, under the assumption that hypotheses can be automatically generated by combining the question with the candidate answer (Pe˜nas et al., 2008a). Thus, validation was seen as a binary classification problem whose evaluation must deal with unbalanced collections (different proportion of positive and negative examples, correct and incorrect answers). For this reason, AVE 2006 used F-measure based on precision and recall for correct answers selection (Pe˜nas et al., 2007). Other option is an evaluation based on the analysis of Receiver Operating Characteristic (ROC) space, sometimes preferred for classification tasks with unbalanced collections. A comparison of both approaches for Answer Validation evaluation is provided in (Rodrigo et al., 2011). AVE 2007 changed its evaluation methodology with two objectives: the first one was to bring systems based on Textual Entailment to the Automatic Hypothesis Generation problem which is not part itself of the Recognising Textual Entailment (RTE) task but an Answer Validation need. The second one was an attempt to quantify the gain in QA performance when more sophisticated validation modules are introduced (Pe˜nas et al., 2008b). 
With this aim, several measures were proposed to assess: the correct selection of candidate answers, the correct rejection of wrong answer and finally estimate the potential gain (in terms of accuracy) that Answer Validation modules can provide to QA (Rodrigo et al., 2008). The idea was to give value to the correctly rejected answers as if they could be correctly answered with the accuracy shown selecting the correct answers. This extension of accuracy in the Answer Validation scenario inspired the initial development of c@1 considering non-response. 6 Conclusions The central idea of this work is that not responding has more value than responding incorrectly. This idea is not new, but despite several attempts in TREC and CLEF there wasn’t a commonly accepted mea1422 sure to assess non-response. We have studied here an extension of accuracy measure with this feature, and with a very easy to understand rationale: Unanswered questions have the same value as if a proportion of them had been answered correctly, and the value they add is related to the performance (accuracy) observed over the answered questions. We have shown that no other estimation of this value produce a sensible measure. We have shown also that the proposed measure c@1 has a good balance of discrimination power, stability and sensitivity properties. Finally, we have shown how this measure rewards systems able to maintain the same number of correct answers and at the same time reduce the number of incorrect ones, by leaving some questions unanswered. Among other tasks, measure c@1 is well suited for evaluating Reading Comprehension tests, where multiple choices per question are given, but only one is correct. Non-response must be assessed if we want to measure effective reading and not just the ability to rank options. This is clearly not enough for the development of reading technologies. Acknowledgments This work has been partially supported by the Research Network MA2VICMR (S2009/TIC-1542) and Holopedia project (TIN2010-21128-C02). References Chris Buckley and Ellen M. Voorhees. 2000. Evaluating evaluation measure stability. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 33–40. ACM. David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, Nico Schlaefer, and Chris Welty. 2010. Building Watson: An Overview of the DeepQA Project. AI Magazine, 31(3). Junichi Fukumoto, Tsuneaki Kato, and Fumito Masui. 2002. Question and Answering Challenge (QAC1): Question Answering Evaluation at NTCIR Workshop 3. In Working Notes of the Third NTCIR Workshop Meeting Part IV: Question Answering Challenge (QAC-1), pages 1-10. Jes´us Herrera, Anselmo Pe˜nas, and Felisa Verdejo. 2005. Question Answering Pilot Task at CLEF 2004. In Multilingual Information Access for Text, Speech and Images, CLEF 2004, Revised Selected Papers., volume 3491 of Lecture Notes in Computer Science, Springer, pages 581–590. Bernardo Magnini, Alessandro Vallin, Christelle Ayache, Gregor Erbach, Anselmo Pe˜nas, Maarten de Rijke, Paulo Rocha, Kiril Ivanov Simov, and Richard F. E. Sutcliffe. 2005. Overview of the CLEF 2004 Multilingual Question Answering Track. In Multilingual Information Access for Text, Speech and Images, CLEF 2004, Revised Selected Papers., volume 3491 of Lecture Notes in Computer Science, Springer, pages 371– 391. 
Anselmo Pe˜nas, ´Alvaro Rodrigo, Valent´ın Sama, and Felisa Verdejo. 2007. Overview of the Answer Validation Exercise 2006. In Evaluation of Multilingual and Multi-modal Information Retrieval, CLEF 2006, Revised Selected Papers, volume 4730 of Lecture Notes in Computer Science, Springer, pages 257–264. Anselmo Pe˜nas, ´Alvaro Rodrigo, Valent´ın Sama, and Felisa Verdejo. 2008a. Testing the Reasoning for Question Answering Validation. In Journal of Logic and Computation. 18(3), pages 459–474. Anselmo Pe˜nas, ´Alvaro Rodrigo, and Felisa Verdejo. 2008b. Overview of the Answer Validation Exercise 2007. In Advances in Multilingual and Multimodal Information Retrieval, CLEF 2007, Revised Selected Papers, volume 5152 of Lecture Notes in Computer Science, Springer, pages 237–248. Anselmo Pe˜nas, Pamela Forner, Richard Sutcliffe, ´Alvaro Rodrigo, Corina Forascu, I˜naki Alegria, Danilo Giampiccolo, Nicolas Moreau, and Petya Osenova. 2010. Overview of ResPubliQA 2009: Question Answering Evaluation over European Legislation. In Multilingual Information Access Evaluation I. Text Retrieval Experiments, CLEF 2009, Revised Selected Papers, volume 6241 of Lecture Notes in Computer Science, Springer. Alvaro Rodrigo, Anselmo Pe˜nas, and Felisa Verdejo. 2008. Evaluating Answer Validation in Multi-stream Question Answering. In Proceedings of the Second International Workshop on Evaluating Information Access (EVIA 2008). Alvaro Rodrigo, Anselmo Pe˜nas, and Felisa Verdejo. 2011. Evaluating Question Answering Validation as a classification problem. Language Resources and Evaluation, Springer Netherlands (In Press). Tetsuya Sakai. 2006. Evaluating Evaluation Metrics based on the Bootstrap. In SIGIR 2006: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Seattle, Washington, USA, August 6-11, 2006, pages 525–532. 1423 Tetsuya Sakai. 2007a. On the Reliability of Factoid Question Answering Evaluation. ACM Trans. Asian Lang. Inf. Process., 6(1). Tetsuya Sakai. 2007b. On the reliability of information retrieval metrics based on graded relevance. Inf. Process. Manage., 43(2):531–548. Ellen M. Voorhees and Chris Buckley. 2002. The effect of Topic Set Size on Retrieval Experiment Error. In SIGIR ’02: Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval, pages 316–323. Ellen M. Voorhees and Dawn M. Tice. 1999. The TREC8 Question Answering Track Evaluation. In Text Retrieval Conference TREC-8, pages 83–105. Ellen M. Voorhees. 2001. Overview of the TREC 2001 Question Answering Track. In E. M. voorhees, D. K. Harman, editors: Proceedings of the Tenth Text REtrieval Conference (TREC 2001). NIST Special Publication 500-250. Ellen M. Voorhees. 2002. Overview of TREC 2002 Question Answering Track. In E.M. Voorhees, L. P. Buckland, editors: Proceedings of the Eleventh Text REtrieval Conference (TREC 2002). NIST Publication 500-251. Ellen M. Voorhees. 2003. Overview of the TREC 2003 Question Answering Track. In Proceedings of the Twelfth Text REtrieval Conference (TREC 2003). 1424
2011
142
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1425–1434, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Improving Question Recommendation by Exploiting Information Need Shuguang Li Department of Computer Science University of York, YO10 5DD, UK [email protected] Suresh Manandhar Department of Computer Science University of York, YO10 5DD, UK [email protected] Abstract In this paper we address the problem of question recommendation from large archives of community question answering data by exploiting the users’ information needs. Our experimental results indicate that questions based on the same or similar information need can provide excellent question recommendation. We show that translation model can be effectively utilized to predict the information need given only the user’s query question. Experiments show that the proposed information need prediction approach can improve the performance of question recommendation. 1 Introduction There has recently been a rapid growth in the number of community question answering (CQA) services such as Yahoo! Answers1, Askville2 and WikiAnswer3 where people answer questions posted by other users. These CQA services have built up very large archives of questions and their answers. They provide a valuable resource for question answering research. Table 1 is an example from Yahoo! Answers web site. In the CQA archives, the title part is the user’s query question, and the user’s information need is usually expressed as natural language statements mixed with questions expressing their interests in the question body part. In order to avoid the lag time involved with waiting for a personal response and to enable high quali1http://answers.yahoo.com 2http://askville.amazon.com 3http://wiki.answers.com ty answers from the archives to be retrieved, we need to search CQA archives of previous questions that are closely associated with answers. If a question is found to be interesting to the user, then a previous answer can be provided with very little delay. Question search and question recommendation are proposed to facilitate finding highly relevant or potentially interesting questions. Given a user’s question as the query, question search tries to return the most semantically similar questions from the question archives. As the complement of question search, we define question recommendation as recommending questions whose information need is the same or similar to the user’s original question. For example, the question “What aspects of my computer do I need to upgrade ...” with the information need “... making a skate movie, my computer freezes, ...” and the question “What is the most cost effective way to expend memory space ...” with information need “... in need of more space for music and pictures ...” are both good recommendation questions for the user in Table 1. So the recommended questions are not necessarily identical or similar to the query question. In this paper, we discuss methods for question recommendation based on using the similarity between information need in the archive. We also propose two models to predict the information need based on the query question even if there’s no information need expressed in the body of the question. We show that with the proposed models it is possible to recommend questions that have the same or similar information need. 
The remainder of the paper is structured as fol1425 Q Title If I want a faster computer should I buy more memory or storage space? ... Q Body I edit pictures and videos so I need them to work quickly. Any advice? Answer ... If you are running out of space on your hard drive, then ... to boost your computer speed usually requires more RAM ... Table 1: Yahoo! Answers question example lows. In section 2, we briefly describe the related work on question search and recommendation. Section 3 addresses in detail how we measure the similarity between short texts. Section 4 describes two models for information need prediction that we use for the experiment. Section 5 tests the performance of the proposed models for the task of question recommendation. Section 7 is the conclusion of this paper. 2 Related Work 2.1 Question Search Burke et al. (1997) combined a lexical metric and a simple semantic knowledge-based (WordNet) similarity method to retrieve semantically similar questions from frequently asked question (FAQ) data. Jeon et al. (2005a) retrieved semantically similar questions from Korean CQA data by calculating the similarity between their answers. The assumption behind their research is that questions with very similar answers tend to be semantically similar. Jeon et al. (2005b) also discussed methods for grouping similar questions based on using the similarity between answers in the archive. These grouped question pairs were further used as training data to estimate probabilities for a translation-based question retrieval model. Wang et al. (2009) proposed a tree kernel framework to find similar questions in the CQA archive based on syntactic tree structures. Wang et al. (2010) mined lexical and syntactic features to detect question sentences in CQA data. 2.2 Question Recommendation Wu et al. (2008) presented an incremental automatic question recommendation framework based on probabilistic latent semantic analysis. Question recommendation in their work considered both the users’ interests and feedback. Duan et al. (2008) made use of a tree-cut model to represent questions as graphs of topic terms. Questions were recommended based on this topic graph. The recommended questions can provide different aspects around the topic of the query question. The above question search and recommendation research provide different ways to retrieve questions from large archives of question answering data. However, none of them considers the similarity or diversity between questions by exploring their information needs. 3 Short Text Similarity Measures In question retrieval systems accurate similarity measures between documents are crucial. Most traditional techniques for measuring the similarity between two documents mainly focus on comparing word co-occurrences. The methods employing this strategy for documents can usually achieve good results, because they may share more common words than short text snippets. However the state-of-theart techniques usually fail to achieve desired results due to short questions and information need texts. In order to measure the similarity between short texts, we make use of three kinds of text similarity measures: TFIDF based, Knowledge based and Latent Dirichlet Allocation (LDA) based similarity measures in this paper. We will compare their performance for the task of question recommendation in the experiment section. 3.1 TFIDF Baeza-Yates and Ribeiro-Neto (1999) provides a TFIDF method to calculate the similarity between two texts. 
Each document is represented by a term vector using TFIDF score. The similarity between two text Di and Dj is the cosine similarity in the vector space model: cos(Di, Dj) = DT i Dj ∥Di∥∥Dj∥ 1426 This method is used in most information retrieval systems as it is both efficient and effective. However if the query text contains only one or two words this method will be biased to shorter answer texts (Jeon et al., 2005a). We also found that in CQA data short contents in the question body cannot provide any information about the users’ information needs. Based on the above two reasons, in the test data sets we do not include the questions whose information need parts contain only a few noninformative words . 3.2 Knowledge-based Measure Mihalcea et al. (2006) proposed several knowledgebased methods for measuring the semantic level similarity of texts to solve the lexical chasm problem between short texts. These knowledge-based similarity measures were derived from word semantic similarity by making use of WordNet. The evaluation on a paraphrase recognition task showed that knowledgebased measures outperform the simpler lexical level approach. We follow the definition in (Mihalcea et al., 2006) to derive a text-to-text similarity metric mcs for two given texts Di and Dj: mcs(Di, Dj) = P w∈Di maxSim(w, Dj) ∗idf(w) P w∈Di idf(w) + P w∈Dj maxSim(w, Di) ∗idf(w) P w∈Dj idf(w) For each word w in Di, maxSim(w, Dj) computes the maximum semantic similarity between w and any word in Dj. In this paper we choose lin (Lin, 1998) and jcn (Jiang and Conrath, 1997) to compute the word-to-word semantic similarity. We only choose nouns and verbs for calculating mcs. Additionally, when w is a noun we restrict the words in document Di (and Dj) to just nouns. Similarly, when w is a verb, we restrict the words in document Di (and Dj) to just verbs. 3.3 Probabilistic Topic Model Celikyilmaz et al. (2010) presented probabilistic topic model based methods to measure the similarity between question and candidate answers. The candidate answers were ranked based on the hidden topics discovered by Latent Dirichlet Allocation (LDA) methods. In contrast to the TFIDF method which measures “common words”, short texts are not compared to each other directly in probabilistic topic models. Instead, the texts are compared using some “thirdparty” topics that relate to them. A passage D in the retrieved documents (document collection) is represented as a mixture of fixed topics, with topic z getting weight θ(D) z in passage D and each topic is a distribution over a finite vocabulary of words, with word w having a probability φ(z) w in topic z. Gibbs Sampling can be used to estimate the corresponding expected posterior probabilities P(z|D) = ˆθ(D) z and P(w|z) = ˆφ(z) w (Griffiths and Steyvers, 2004). In this paper we use two LDA based similarity measures in (Celikyilmaz et al., 2010) to measure the similarity between short information need texts. The first LDA similarity method uses KL divergence to measure the similarity between two documents under each given topic: simLDA1(Di, Dj) = 1 K K X k=1 10W(D(z=k) i ,D(z=k) j ) W(D(z=k) i , D(z=k) j ) = −KL(D(z=k) i ∥ D(z=k) i + D(z=k) j 2 ) −KL(D(z=k) j ∥ D(z=k) i + D(z=k) j 2 ) W(D(z=k) i , D(z=k) j ) calculates the similarity between two documents under topic z = k using KL divergence measure. D(z=k) i is the probability distribution of words in document Di given a fixed topic z. 
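A rough sketch of the simLDA1 computation is given below for illustration; it assumes the per-document, per-topic word distributions D_i^{(z=k)} are available as rows of a (K x V) array, and it adds a small epsilon to avoid log(0) — both are implementation choices not specified in the original description.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence with a small epsilon to avoid log(0)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def sim_lda1(di_topic_word, dj_topic_word):
    """Average over topics of 10**W, where W is the (negative) symmetric KL
    of the two documents' word distributions under each topic against their
    mean, following the simLDA1 definition above."""
    K = di_topic_word.shape[0]
    total = 0.0
    for k in range(K):
        p, q = di_topic_word[k], dj_topic_word[k]
        m = (p + q) / 2.0
        w = -kl(p, m) - kl(q, m)
        total += 10.0 ** w
    return total / K
```

Since W is never positive, each topic contributes a value in (0, 1], so the resulting score also stays in (0, 1].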
The second LDA similarity measure from (Griffiths and Steyvers, 2004) treats each document as a probability distribution of topics: simLDA2(Di, Dj) = 10W(ˆθ(Di),ˆθ(Dj)) where ˆθ(Di) is document Di’s probability distribution of topics as defined earlier. 1427 4 Information Need Prediction using Statistical Machine Translation Model There are two reasons that we need to predict information need. It is often the case that the query question does not have a question body part. So we need a model to predict the information need part based on the query question in order to recommend questions based on the similarity of their information needs. Another reason is that information need prediction plays a crucial part not only in Question Answering but also in information retrieval (Liu et al., 2008). In this paper we propose an information need prediction method based on a statistical machine translation model. 4.1 Statistical Machine Translation Model (f(s), e(s)), s = 1,...,S is a parallel corpus. In a sentence pair (f, e), source language String, f = f1f2...fJ has J words, and e = e1e2...eI has I words. And alignment a = a1a2...aJ represents the mapping information from source language words to target words. Statistical machine translation models estimate Pr(f|e), the translation probability from source language string e to target language string f (Och et al., 2003): Pr(f|e) = X a Pr(f, a|e) EM-algorithm is usually used to train the alignment models to estimate lexicon parameters p(f|e). In E-step, the counts for one sentence pair (f ,e) are: c(f|e; f, e) = X a Pr(a|f, e) X i,j δ(f, fj)δ(e, eaj) Pr(a|f, e) = Pr(f, a|e)/Pr(a|e) In the M-step, lexicon parameters become: p(f|e) ∝ X s c(f|e; f(s), e(s)) Different alignment models such as IBM-1 to IBM-5 (Brown et al., 1993) and HMM model (Och and Ney, 2000) provide different decompositions of Pr(f, a|e). For different alignment models different approaches were proposed to estimate the corresponding alignments and parameters. The details can be found in (Och et al., 2003; Brown et al., 1993). 4.2 Information Need Prediction After estimating the statistical translation probabilities, we treat the information need prediction as the process of ranking words by p(w|Q), the probability of generating word w from question Q: P(w|Q) = λ X t∈Q Ptr(w|t)P(t|Q)+(1−λ)P(w|C) The word-to-word translation probability Ptr(w|t) is the probability of word w is translated from a word t in question Q using the translation model. The above formula uses linear interpolation smoothing of the document model with the background language model P(t|C). λ is the smoothing parameter. P(t|Q) and P(t|C) are estimated using the maximum likelihood estimator. One important consideration is that statistical machine translation models first estimate Pr(f|e) and then calculate Pr(e|f) using Bayes’ theorem to minimize ordering errors (Brown et al., 1993): Pr(e|f) = Pr(f|e)Pr(e) Pr(f) But in this paper, we skip this step as we found out the order of words in information need part is not an important factor. In our collected CQA archive, question title and information need pairs can be considered as a type of parallel corpus, which is used for estimating word-to-word translation probabilities. More specifically, we estimated the IBM-4 model by GIZA++4 with the question part as the source language and information need part as the target language. 5 Experiments and Results 5.1 Text Preprocessing The questions posted on community QA sites often contain spelling or grammar errors. 
These errors in4http://fjoch.com/GIZA++.html 1428 Test c Test t Methods MRR Precision@5 Precision@10 MRR Precision@5 Precision@10 TFIDF 84.2% 67.1% 61.9% 92.8% 74.8% 63.3% Knowledge1 82.2% 65.0% 65.6% 78.1% 67.0% 69.6% Knowledge2 76.7% 54.9% 59.3% 61.6% 53.3% 58.2% LDA1 92.5% 68.8% 64.7% 91.8% 75.4% 69.8% LDA2 61.5% 55.3% 60.2% 52.1% 57.4% 54.5% Table 2: Question recommendation results without information need prediction Test c Test t Methods MRR Precision@5 Precision@10 MRR Precision@5 Precision@10 TFIDF 86.2% 70.8% 64.3% 95.1% 77.8% 69.3% Knowledge1 82.2% 65.0% 66.6% 76.7% 68.0% 68.7% Knowledge2 76.7% 54.9% 60.2% 61.6% 53.3% 58.2% LDA1 95.8% 72.4% 68.2% 96.2% 79.5% 69.2% LDA2 61.5% 55.3% 58.9% 68.1% 58.3% 53.9% Table 3: Question recommendation results with information need predicted by translation model fluence the calculation of similarity and the performance of information retrieval (Zhao et al., 2007; Bunescu and Huang, 2010). In this paper, we use an open source software afterthedeadline5 to automatically correct the spelling errors in the question and information need texts first. We also made use of Web 1T 5-gram6 to implement an N-Gram based method (Cheng et al., 2008) to further filter out the false positive corrections and re-rank correction suggestions (Mudge, 2010). The texts are tagged by Brill’s Part-of-Speech Tagger7 as the rule-based tagger is more robust than the state-of-art statistical taggers for raw web contents. This tagging information is only used for WordNet similarity calculation. Stop word removal and lemmatization are applied to the all the raw texts before feeding into machine translation model training, the LDA model estimating and similarity calculation. 5.2 Construction of Training and Testing Sets We made use of the questions crawled from Yahoo! Answers for the estimating models and evaluation. More specifically, we obtained 2 million questions under two categories at Yahoo! Answers: ‘travel’ 5http://afterthedeadline.com 6http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?cata logId=LDC2006T13 7http://www.umiacs.umd.edu/ jimmylin/resources.html (1 million), and ‘computers&internet’ (1 million). Depending on whether the best answers have been chosen by the asker, questions from Yahoo! answers can be divided into ‘resolved’ and ‘unresolved’ categories. From each of the above two categories, we randomly selected 200 resolved questions to construct two testing data sets: ‘Test t’ (‘travel’), and ‘Test c’ (‘computers&internet’). In order to measure the information need similarity in our experiment we selected only those questions whose information needs part contained at least 3 informative words after stop word removal. The rest of the questions ‘Train t’ and ‘Train c’ under the two categories are left for estimating the LDA topic models and the translation models. We will show how we obtain these models later. 5.3 Experimental Setup For each question (query question) in ‘Test t’ or ‘Test c’, we used the words in the question title part as the main search query and the other words in the information need part as search query expansion to retrieve candidate recommended questions from Yahoo! Answers website. We obtained an average of 154 resolved questions under ‘travel’ or ‘computers&internet’ category, and three assessors were involved in the manual judgments. Given a question returned by a recommendation 1429 method, two assessors are asked to label it with ‘good’ or ‘bad’. The third assessor will judge the conflicts. 
The assessors are also asked to read the information need and answer parts. If a recommended question is considered to express the same or similar information need, the assessor will label it ‘good’; otherwise, the assessor will label it as ‘bad’. Three measures for evaluating the recommendation performance are utilized. They are Mean Reciprocal Rank (MRR), top five prediction accuracy (precision@5) and top ten prediction accuracies (precision@10) (Voorhees and Tice, 2004; Cao et al., 2008). In MRR the reciprocal rank of a query question is the multiplicative inverse of the rank of the first ‘good’ recommended question. The top five prediction accuracy for a query question is the number of ‘good’ recommended questions out of the top five ranked questions and the top ten accuracy is calculated out of the top ten ranked questions. 5.4 Similarity Measure The first experiment conducted question recommendation based on their information need parts. Different text similarity methods described in section 3 were used to measure the similarity between the information need texts. In TFIDF similarity measure (TFIDF), the idf values for each word were computed from frequency counts over the entire Aquaint corpus8. For calculating the word-to-word knowledge-based similarity, a WordNet::Similarity Java implementation9 of the similarity measures lin (Knowledge2) and jcn (Knowledge1) is used in this paper. For calculating topic model based similarity, we estimated two LDA models from ’Train t’ and ’Train c’ using GibbsLDA++10. We treated each question including the question title and the information need part as a single document of a sequence of words. These documents were preprocessed before being fed into LDA model. 1800 iterations for Gibbs sampling 200 topics parameters were set for each LDA model estimation. The results in table 2 show that TFIDF and LDA1 methods perform better for recommending questions than the others. After further analysis of the questions recommended by both methods, we discov8http://ldc.upenn.edu/Catalog/docs/LDC2002T31 9http://cogs.susx.ac.uk/users/drh21/ 10http://gibbslda.sourceforge.net Q1: If I want a faster computer should I buy more memory or storage space? InfoN If I want a faster computer should I buy more memory or storage space? Whats the difference? I edit pictures and videos so I need them to work quickly. ... RQ1 Would buying 1gb memory upgrade make my computer faster? InfoN I have an inspiron B130. It has 512mb memory now. I would add another 1gb into 2nd slot ... RQ2 whats the difference between memory and hard drive space on a computer and why is.....? InfoN see I am starting edit videos on my computer but i am running out of space. why is so expensive to buy memory but not external drives? ... Q2: Where should my family go for spring break? InfoN ... family wants to go somewhere for a couple days during spring break ... prefers a warmer climate and we live in IL, so it shouldn’t be SUPER far away. ... a family road trip. ... RQ1 Whats a cheap travel destination for spring break? InfoN I live in houston texas and i’m trying to find i inexpensive place to go for spring break with my family.My parents don’t want to spend a lot of money due to the economy crisis, ... a fun road trip... RQ2 Alright you creative deal-seekers, I need some help in planning a spring break trip for my family InfoN Spring break starts March 13th and goes until the 21st ... Someplace WARM!!! Family-oriented hotel/resort ... 
North American Continent (Mexico, America, Jamaica, Bahamas, etc.) Cost= Around $5,000 ... Table 4: Question recommendation results by LDA measuring the similarity between information needs 1430 ered that the ordering of the recommended questions from TFIDF and LDA1 are quite different. TFIDF similarity method prefers texts with more common words, while the LDA1 method can find the relation between the non-common words between short texts based on a series of third-party topics. The LDA1 method outperforms the TFIDF method in two ways: (1) the top recommended questions’ information needs share less common words with the query question’s; (2) the top recommended questions span wider topics. The questions highly recommended by LDA1 can suggest more useful topics to the user. Knowledge-based methods are also shown to perform worse than TFIDF and LDA1. We found that some words were mis-tagged so that they were not included in the word-to-word similarity calculation. Another reason for the worse performance is that the words out of the WordNet dictionary were also not included in the similarity calculation. The Mean Reciprocal Rank score for TFIDF and LDA1 are more than 80%. That is to say, we are able to recommend questions to the users by measuring their information needs. The first two recommended questions for Q1 and Q2 using LDA1 method are shown in table 4. InfoN is the information need part associated with each question. In the preprocessing step, some words were successfully corrected such as “What should I do this saturday? ... and staying in a hotell ...” and “my faimly is traveling to florda ...”. However, there are still a small number of texts such as “How come my Gforce visualization doesn’t work?” and “Do i need an Id to travel from new york to maimi?” failed to be corrected. So in the future, a better method is expected to correct these failure cases. 5.5 Information Need Prediction There are some retrieved questions whose information need parts are empty or become empty or almost empty (one or two words left) after the preprocessing step. The average number of such retrieved questions for each query question is 10 in our experiment. The similarity ranking scores of these questions are quite low or zero in the previous experiment. In this experiment, we will apply information need prediction to the questions whose information needs are missing in order to find out whether we improve the recommendation task. The question and information need pairs in both ‘Train t’ and ‘Train c’ training sets were used to train two IBM-4 translation models by GIZA++ toolkit. These pairs were also preprocessed before training. And the pairs whose information need part become empty after preprocessing were disregarded. During the experiment, we found that some of the generated words in the information need parts are themselves. This is caused by the self translation problem in translation model: the highest translation score for a word is usually given to itself if the target and source languages are the same (Xue et al., 2008). This has always been a tough question: not using self-translated words can reduce retrieval performance as the information need parts need the terms to represent the semantic meanings; using self-translated words does not take advantage of the translation approach. To tackle this problem, we control the number of the words predicted by the translation model to be exactly twice the number of words in the corresponding preprocessed question. 
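To make this prediction step concrete, the sketch below (ours, not the authors' code) assumes the GIZA++-trained lexical translation probabilities have been loaded into a nested dictionary mapping each question word to candidate information-need words with their probabilities; candidates are scored by summing translation probabilities over the question words, and the output is capped at twice the length of the preprocessed question, as described above. The dictionary format, the scoring rule, and the function name are illustrative assumptions.

from collections import defaultdict

def predict_information_need(question_words, trans_table):
    """Predict likely information-need words for a question whose
    information-need part is missing.

    question_words: preprocessed (stop-word-removed, lemmatized) question terms.
    trans_table:    nested dict, trans_table[q][w] = p(w | q), e.g. estimated
                    with GIZA++ from (question, information-need) pairs.

    Following the paper, the number of predicted words is capped at twice the
    number of words in the preprocessed question, which limits the impact of
    the self-translation problem.
    """
    scores = defaultdict(float)
    for q in question_words:
        for cand, prob in trans_table.get(q, {}).items():
            scores[cand] += prob          # aggregate evidence over question words
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[: 2 * len(question_words)]

# Hypothetical toy translation table for illustration only.
table = {"laptop": {"laptop": 0.5, "dell": 0.2, "price": 0.1, "mac": 0.1},
         "college": {"college": 0.6, "student": 0.2, "buy": 0.1}}
print(predict_information_need(["laptop", "college"], table))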
The predicted information need words for the retrieved questions are shown in Table 5. In Q1, the information need behind question “recommend website for custom built computer parts” may imply that the users need to know some information about building computer parts such as “ram” and “motherboard” for a different purpose such as “gaming”. While in Q2, the user may want to compare computers in different brands such as “dell” and “mac” or consider the “price” factor for “purchasing a laptop for a college student”. We also did a small scale comparison between the generated information needs against the real questions whose information need parts are not empty. Q3 and Q4 in Table 5 are two examples. The original information need for Q3 is “looking for beautiful beaches and other things to do such as museums, zoos, shopping, and great seafood” in CQA. The generated content for Q3 contains words in wider topics such as ‘wedding’, ‘surf’ and the price information (‘cheap’). This reflects that there are some other users asking similar questions with the same or other interests. From the results in Table 3, we can see that the performance of most similarity methods were improved by making use of information need predic1431 tion. Different similarity measures received different degrees of improvement. LDA1 obtained the highest improvement followed by the TFIDF based method. These two approaches are more sensitive to the contents generated by a translation model. However we found out that in some cases the LDA1 model failed to give higher scores to good recommendation questions. For example, Q5, Q6, and Q7 in table 5 were retrieved as recommendation candidates for the query question in Table 1. All of the three questions were good recommendation candidates, but only Q6 ranked fifth while Q5 and Q7 were out of the top 30 by LDA1 method. Moreover, in a small number of cases bad recommendation questions received higher scores and jeopardized the performance. For example, for query question “How can you add subtitles to videos?” with information need “... add subtitles to a music video ... got off youtube ...download for this ...”, a retrieved question “How would i add a music file to a video clip. ...” was highly recommended by TFIDF approach as predicted information need contained ‘youtube’, ‘video’, ‘music’, ‘download’, ... . The MRR score received an improvement from 92.5% to 95.8% in the ‘Test c’ and from 91.8% to 96.2% in ‘Test t’. This means that the top one question recommended by our methods can be quite well catering to the users’ information needs. The top five precision and the top ten precision scores using TFIDF and LDA1 methods also received different degrees of improvement. Thus, we can improve the performance of question recommendation by predicting information needs. 6 Conclusions In this paper we addressed the problem of recommending questions from large archives of community question answering data based on users’ information needs. We also utilized a translation model and a LDA topic model to predict the information need only given the user’s query question. Different information need similarity measures were compared to prove that it is possible to satisfy user’s information need by recommending questions from large archives of community QA. The Latent Dirichlet allocation based approach was proved to perform better on measuring the similarity between short Q1: Please recommend A good website for Custom Built Computer parts? 
InfoN custom, site, ram, recommend, price, motherboard, gaming, ... Q2: What is the best laptop for a college student? InfoN know, brand, laptop, college, buy, price, dell, mac, ... Q3: What is the best Florida beach for a honeymoon? InfoN Florida, beach, honeymoon, wedding, surf, cheap, fun, ... Q4: Are there any good clubs in Manchester InfoN club, bar, Manchester, music, age, fun, drink, dance, ... Q5: If i buy a video card for my computer will that make it faster? InfoN nvidia, video, ati, youtube, card, buy, window, slow, computer, graphics, geforce, faster, ... Q6: If I buy a bigger hard drive for my laptop, will it make my computer run faster or just increase the memory? InfoN laptop, ram, run, buy, bigger, memory, computer, increase, gb, hard, drive, faster, ... Q7: Is there a way I can make my computer work faster rather than just increasing the ram or harware space? InfoN space, speed, ram, hardware, main, gig, slow, computer, increase, work, gb, faster, ... Table 5: Information need prediction examples using IBM-4 translation model 1432 texts in the semantic level than traditional methods. Experiments showed that the proposed translation based language model for question information need prediction further enhanced the performance of question recommendation methods. References Ricardo A. Baeza-Yates and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, Robert L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, v.19 n.2, June 1993. Razvan Bunescu and Yunfeng Huang. 2010. Learning the Relative Usefulness of Questions in Community QA. Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP) , Cambridge, MA. Robin D. Burke and Kristian J. Hammond and Vladimir A. Kulyukin and Steven L. Lytinen and Noriko Tomuro and Scott Schoenberg. 1997. Question answering from frequently-asked question files: Experiences with the FAQ Finder system. AI Magazine, 18, 57C66. Yunbo Cao, Huizhong Duan, Chin-Yew Lin, Yong Yu, and Hsiao-Wuen Hon. 2008. Recommending Questions Using the MDL-based Tree Cut Model. In: Proc. of the 17th Int. Conf. on World Wide Web, pp. 81-90. Asli Celikyilmaz and Dilek Hakkani-Tur and Gokhan Tur. 2010. LDA Based Similarity Modeling for Question Answering. In NAACL 2010 C Workshop on Semantic Search. Charibeth Cheng, Cedric Paul Alberto, Ian Anthony Chan, and Vazir Joshua Querol. 2008. SpellCheF: Spelling Checker and Corrector for Filipino. Journal of Research in Science, Computing and Engineering, North America, 4, sep. 2008. Lynn Silipigni Connaway and Chandra Prabha. 2005. An overview of the IMLS Project “Sense-making the information confluence: The whys and hows of college and university user satisficing of information needs”. Presented at Library of Congress Forum, American Library Association Midwinter Conference, Boston, MA, Jan 16, 2005. Huizhong Duan, Yunbo Cao, Chin-Yew Lin, and Yong Yu. 2008. Searching questions by identifying question topic and question focus. In HLT-ACL, pages 156C164. Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. Natl Acad Sci 101:5228C5235. Jiwoon Jeon, W. Bruce Croft and Joon Ho Lee. 2005a. Finding semantically similar questions based on their answers. In Proc. of SIGIR05. Jiwoon Jeon, W. Bruce Croft and Joon Ho Lee. 2005b. Finding similar questions in large question and answer archives. 
In CIKM, pages 84C90. Jay J. Jiang and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of International Conference on Research in Computational Linguistics, Taiwan. Dekang Lin. 1998. An Information-Theoretic Definition of Similarity. In Proceedings of the Fifteenth International Conference on Machine Learning (ICML ’98), Jude W. Shavlik (Ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 296-304. Yandong Liu, Jiang Bian, and Eugene Agichtein. 2008. Predicting information seeker satisfaction in community question answering. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR ’08). ACM, New York, NY, USA, 483-490. Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceedings of the 21st national conference on Artificial intelligence (AAAI ’06), pages 775C780. AAAI Press. Raphael Mudge. 2010. The design of a proofreading software service. In Proceedings of the NAACL HLT 2010 Workshop on Computational Linguistics and Writing: Writing Processes and Authoring Aids (CL&W ’10). Association for Computational Linguistics, Morristown, NJ, USA, 24-32. Franz Josef Och, Hermann Ney. 2000. A comparison of alignment models for statistical machine translation. Proceedings of the 18th conference on Computational linguistics, July 31-August 04, Saarbrucken, Germany. Franz Josef Och, Hermann Ney. 2003.A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, volume 29, number 1, pp. 1951 March 2003. Jahna Otterbacher, Gunes Erkan, Dragomir R. Radev. 2009. Biased LexRank: Passage retrieval using random walks with question-based priors. Information Processing and Management: an International Journal, v.45 n.1, p.42-54, January, 2009. Chandra Prabha, Lynn Silipigni Connaway, Lawrence Olszewski, Lillie R. Jenkins. 2007. What is enough? Satisficing information needs. Journal of Documentation (January, 63,1). Ellen Voorhees and Dawn Tice. 2000. The TREC-8 question answering track evaluation. In Text Retrieval Conference TREC-8, Gaithersburg, MD. Kai Wang, Yanming Zhao, and Tat-Seng Chua. 2009. A syntactic tree matching approach to finding similar 1433 questions in community-based qa services. In SIGIR, pages 187C194. Kai Wang and Tat-Seng Chua. 2010. Exploiting salient patterns for question detection and question retrieval in community-based question answering. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING ’10). Association for Computational Linguistics, Stroudsburg, PA, USA, 1155-1163. Hu Wu, Yongji Wang, and Xiang Cheng. 2008. Incremental probabilistic latent semantic analysis for automatic question recommendation. In RecSys. Xiaobing Xue, Jiwoon Jeon, W. Bruce Croft. 2008. Retrieval models for question and answer archives. In SIGIR’08, pages 475C482. ACM. Shiqi Zhao, Ming Zhou, and Ting Liu. 2007. Learning Question Paraphrases for QA from Encarta Logs. In Proceedings of International Joint Conferences on Artificial Intelligence (IJCAI), pages 1795-1800. 1434
2011
143
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1435–1444, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Semi-Supervised Frame-Semantic Parsing for Unknown Predicates Dipanjan Das and Noah A. Smith Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA {dipanjan,nasmith}@cs.cmu.edu Abstract We describe a new approach to disambiguating semantic frames evoked by lexical predicates previously unseen in a lexicon or annotated data. Our approach makes use of large amounts of unlabeled data in a graph-based semi-supervised learning framework. We construct a large graph where vertices correspond to potential predicates and use label propagation to learn possible semantic frames for new ones. The label-propagated graph is used within a frame-semantic parser and, for unknown predicates, results in over 15% absolute improvement in frame identification accuracy and over 13% absolute improvement in full frame-semantic parsing F1 score on a blind test set, over a state-of-the-art supervised baseline. 1 Introduction Frame-semantic parsing aims to extract a shallow semantic structure from text, as shown in Figure 1. The FrameNet lexicon (Fillmore et al., 2003) is a rich linguistic resource containing expert knowledge about lexical and predicate-argument semantics. The lexicon suggests an analysis based on the theory of frame semantics (Fillmore, 1982). Recent approaches to frame-semantic parsing have broadly focused on the use of two statistical classifiers corresponding to the aforementioned subtasks: the first one to identify the most suitable semantic frame for a marked lexical predicate (target, henceforth) in a sentence, and the second for performing semantic role labeling (SRL) given the frame. The FrameNet lexicon, its exemplar sentences containing instantiations of semantic frames, and full-text annotations provide supervision for learning frame-semantic parsers. Yet these annotations lack coverage, including only 9,300 annotated target types. Recent papers have tried to address the coverage problem. Johansson and Nugues (2007) used WordNet (Fellbaum, 1998) to expand the list of targets that can evoke frames and trained classifiers to identify the best-suited frame for the newly created targets. In past work, we described an approach where latent variables were used in a probabilistic model to predict frames for unseen targets (Das et al., 2010a).1 Relatedly, for the argument identification subtask, Matsubayashi et al. (2009) proposed a technique for generalization of semantic roles to overcome data sparseness. Unseen targets continue to present a major obstacle to domain-general semantic analysis. In this paper, we address the problem of idenfifying the semantic frames for targets unseen either in FrameNet (including the exemplar sentences) or the collection of full-text annotations released along with the lexicon. Using a standard model for the argument identification stage (Das et al., 2010a), our proposed method improves overall frame-semantic parsing, especially for unseen targets. To better handle these unseen targets, we adopt a graph-based semi-supervised learning stategy (§4). We construct a large graph over potential targets, most of which 1Notwithstanding state-of-the-art results, that approach was only able to identify the correct frame for 1.9% of unseen targets in the test data available at that time. That system achieves about 23% on the test set used in this paper. 
1435 bell.n ring.v there be.v enough.a LU NOISE_MAKERS SUFFICIENCY Frame EXISTENCE CAUSE_TO_MAKE_NOISE . bells N_m more than six of the eight Sound_maker Enabled_situation ring to ringers Item enough Entity Agent n't are still there But Figure 1: An example sentence from the PropBank section of the full-text annotations released as part of FrameNet 1.5. Each row under the sentence correponds to a semantic frame and its set of corresponding arguments. Thick lines indicate targets that evoke frames; thin solid/dotted lines with labels indicate arguments. N m under “bells” is short for the Noise maker role of the NOISE MAKERS frame. are drawn from unannotated data, and a fraction of which come from seen FrameNet annotations. Next, we perform label propagation on the graph, which is initialized by frame distributions over the seen targets. The resulting smoothed graph consists of posterior distributions over semantic frames for each target in the graph, thus increasing coverage. These distributions are then evaluated within a frame-semantic parser (§5). Considering unseen targets in test data (although few because the test data is also drawn from the training domain), significant absolute improvements of 15.7% and 13.7% are observed for frame identification and full framesemantic parsing, respectively, indicating improved coverage for hitherto unobserved predicates (§6). 2 Background Before going into the details of our model, we provide some background on two topics relevant to this paper: frame-semantic parsing and graph-based learning applied to natural language tasks. 2.1 Frame-semantic Parsing Gildea and Jurafsky (2002) pioneered SRL, and since then there has been much applied research on predicate-argument semantics. Early work on frame-semantic role labeling made use of the exemplar sentences in the FrameNet corpus, each of which is annotated for a single frame and its arguments (Thompson et al., 2003; Fleischman et al., 2003; Shi and Mihalcea, 2004; Erk and Pad´o, 2006, inter alia). Most of this work was done on an older, smaller version of FrameNet. Recently, since the release of full-text annotations in SemEval’07 (Baker et al., 2007), there has been work on identifying multiple frames and their corresponding sets of arguments in a sentence. The LTH system of Johansson and Nugues (2007) performed the best in the SemEval’07 shared task on frame-semantic parsing. Our probabilistic frame-semantic parser outperforms LTH on that task and dataset (Das et al., 2010a). The current paper builds on those probabilistic models to improve coverage on unseen predicates.2 Expert resources have limited coverage, and FrameNet is no exception. Automatic induction of semantic resources has been a major effort in recent years (Snow et al., 2006; Ponzetto and Strube, 2007, inter alia). In the domain of frame semantics, previous work has sought to extend the coverage of FrameNet by exploiting resources like VerbNet, WordNet, or Wikipedia (Shi and Mihalcea, 2005; Giuglea and Moschitti, 2006; Pennacchiotti et al., 2008; Tonelli and Giuliano, 2009), and projecting entries and annotations within and across languages (Boas, 2002; Fung and Chen, 2004; Pad´o and Lapata, 2005). Although these approaches have increased coverage to various degrees, they rely on other lexicons and resources created by experts. F¨urstenau and Lapata (2009) proposed the use of unlabeled data to improve coverage, but their work was limited to verbs. 
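As an illustration of the parsing stage just described, the sketch below shows how the candidate frame set for a target might be assembled: the frames observed with the target in the annotations if it is a seen target, the top M frames of the label-propagated distribution if it is an unseen target present in the graph, and the full frame inventory otherwise. The data structures and function name are our own assumptions, not the released implementation.

def candidate_frames(target, seen_frames, graph_posterior, all_frames, M=2):
    """Return the candidate frame set F_i for a target.

    seen_frames:     dict target -> set of frames observed for it in FrameNet data.
    graph_posterior: dict target -> {frame: probability} from label propagation.
    all_frames:      full FrameNet frame inventory, used only as a fallback.
    """
    if target in seen_frames:                      # target seen in annotations
        return set(seen_frames[target])
    if target in graph_posterior:                  # unseen target, but in the graph
        q = graph_posterior[target]
        return set(sorted(q, key=q.get, reverse=True)[:M])
    return set(all_frames)                         # unseen and not in the graph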
Bejan (2009) used self-training to improve frame identification and reported improvements, but did not explicitly model unknown targets. In contrast, we use statistics gathered from large volumes of unlabeled data to improve the coverage of a frame-semantic parser on several syntactic categories, in a novel framework that makes use of graph-based semi-supervised learning. 2SEMAFOR, the system presented by Das et al. (2010a) is publicly available at http://www.ark.cs.cmu.edu/ SEMAFOR and has been extended in this work. 1436 2.2 Graph-based Semi-Supervised Learning In graph-based semi-supervised learning, one constructs a graph whose vertices are labeled and unlabeled examples. Weighted edges in the graph, connecting pairs of examples/vertices, encode the degree to which they are expected to have the same label (Zhu et al., 2003). Variants of label propagation are used to transfer labels from the labeled to the unlabeled examples. There are several instances of the use of graph-based methods for natural language tasks. Most relevant to our work an approach to word-sense disambiguation due to Niu et al. (2005). Their formulation was transductive, so that the test data was part of the constructed graph, and they did not consider predicate-argument analysis. In contrast, we make use of the smoothed graph during inference in a probabilistic setting, in turn using it for the full frame-semantic parsing task. Recently, Subramanya et al. (2010) proposed the use of a graph over substructures of an underlying sequence model, and used a smoothed graph for domain adaptation of part-of-speech taggers. Subramanya et al.’s model was extended by Das and Petrov (2011) to induce part-of-speech dictionaries for unsupervised learning of taggers. Our semi-supervised learning setting is similar to these two lines of work and, like them, we use the graph to arrive at better final structures, in an inductive setting (i.e., where a parametric model is learned and then separately applied to test data, following most NLP research). 3 Approach Overview Our overall approach to handling unobserved targets consists of four distinct stages. Before going into the details of each stage individually, we provide their overview here: Graph Construction: A graph consisting of vertices corresponding to targets is constructed using a combination of frame similarity (for observed targets) and distributional similarity as edge weights. This stage also determines a fixed set of nearest neighbors for each vertex in the graph. Label Propagation: The observed targets (a small subset of the vertices) are initialized with empirical frame distributions extracted from FrameNet annotations. Label propagation results in a distribution of frames for each vertex in the graph. Supervised Learning: Frame identification and argument identification models are trained following Das et al. (2010a). The graph is used to define the set of candidate frames for unseen targets. Parsing: The frame identification model of Das et al. disambiguated among only those frames associated with a seen target in the annotated data. For an unseen target, all frames in the FrameNet lexicon were considered (a large number). The current work replaces that strategy, considering only the top M frames in the distribution produced by label propagation. This strategy results in large improvements in frame identification for the unseen targets and makes inference much faster. Argument identification is done exactly like Das et al. (2010a). 
4 Semi-Supervised Learning We perform semi-supervised learning by constructing a graph of vertices representing a large number of targets, and learn frame distributions for those which were not observed in FrameNet annotations. 4.1 Graph Construction We construct a graph with targets as vertices. For us, each target corresponds to a lemmatized word or phrase appended with a coarse POS tag, and it resembles the lexical units in the FrameNet lexicon. For example, two targets corresponding to the same lemma would look like boast.N and boast.V. Here, the first target is a noun, while the second is a verb. An example multiword target is chemical weapon.N. We use two resources for graph construction. First, we take all the words and phrases present in the dependency-based thesaurus constructed using syntactic cooccurrence statistics (Lin, 1998).3 To construct this resource, a corpus containing 64 million words was parsed with a fast dependency parser (Lin, 1993; Lin, 1994), and syntactic contexts were used to find similar lexical items for a given word 3This resource is available at http://webdocs.cs. ualberta.ca/˜lindek/Downloads/sim.tgz 1437 difference.N similarity.N discrepancy.N resemble.V disparity.N resemblance.N inequality.N variant.N divergence.N poverty.N homelessness.N wealthy.A rich.A deprivation.N destitution.N joblessness.N unemployment.N employment.N unemployment rate.N powerlessness.N UNEMPLOYMENT_RATE UNEMPLOYMENT_RATE UNEMPLOYMENT_RATE POVERTY POVERTY POVERTY SIMILARITY SIMILARITY SIMILARITY SIMILARITY SIMILARITY Figure 2: Excerpt from a graph over targets. Green targets are observed in the FrameNet data. Above/below them are shown the most frequently observed frame that these targets evoke. The black targets are unobserved and label propagation produces a distribution over most likely frames that they could evoke. or phrase. Lin separately treated nouns, verbs and adjectives/adverbs and the thesaurus contains three parts for each of these categories. For each item in the thesaurus, 200 nearest neighbors are listed with a symmetric similarity score between 0 and 1. We processed this thesaurus in two ways: first, we lowercased and lemmatized each word/phrase and merged entries which shared the same lemma; second, we separated the adjectives and adverbs into two lists from Lin’s original list by scanning a POS-tagged version of the Gigaword corpus (Graff, 2003) and categorizing each item into an adjective or an adverb depending on which category the item associated with more often in the data. The second step was necessary because FrameNet treats adjectives and adverbs separately. At the end of this processing step, we were left with 61,702 units—approximately six times more than the targets found in FrameNet annotations—each labeled with one of 4 coarse tags. We considered only the top 20 most similar targets for each target, and noted Lin’s similarity between two targets t and u, which we call simDL(t, u). The second component of graph construction comes from FrameNet itself. We scanned the exemplar sentences in FrameNet 1.54 and the training section of the full-text annotations that we use to train the probabilistic frame parser (see §6.1), and gathered a distribution over frames for each target. For a pair of targets t and u, we measured the Euclidean distance5 between their frame distributions. 
This distance was next converted to a similarity score, namely, simFN(t, u) between 0 and 1 by subtracting each one from the maximum distance found in 4http://framenet.icsi.berkeley.edu 5This could have been replaced by an entropic distance metric like KL- or JS-divergence, but we leave that exploration to future work. the whole data, followed by normalization. Like simDL(t, u), this score is symmetric. This resulted in 9,263 targets, and again for each, we considered the 20 most similar targets. Finally, the overall similarity between two given targets t and u was computed as: sim(t, u) = α · simFN(t, u) + (1 −α) · simDL(t, u) Note that this score is symmetric because its two components are symmetric. The intuition behind taking a linear combination of the two types of similarity functions is as follows. We hope that distributionally similar targets would have the same semantic frames because ideally, lexical units evoking the same set of frames appear in similar syntactic contexts. We would also like to involve the annotated data in graph construction so that it can eliminate some noise in the automatically constructed thesaurus.6 Let K(t) denote the K most similar targets to target t, under the score sim. We link vertices t and u in the graph with edge weight wtu, defined as: wtu = ( sim(t, u) if t ∈K(u) or u ∈K(t) 0 otherwise (1) The hyperparameters α and K are tuned by crossvalidation (§6.3). 4.2 Label Propagation First, we softly label those vertices of the constructed graph for which frame distributions are available from the FrameNet data (the same distributions that are used to compute simFN). Thus, initially, a small fraction of the vertices in the graph 6In future work, one might consider learning a similarity metric from the annotated data, so as to exactly suit the frame identification task. 1438 have soft frame labels on them. Figure 2 shows an excerpt from a constructed graph. For simplicity, only the most probable frames under the empirical distribution for the observed targets are shown; we actually label each vertex with the full empirical distribution over frames for the corresponding observed target in the data. The dotted lines demarcate parts of the graph that associate with different frames. Label propagation helps propagate the initial soft labels throughout the graph. To this end, we use a variant of the quadratic cost criterion of Bengio et al. (2006), also used by Subramanya et al. (2010) and Das and Petrov (2011).7 Let V denote the set of all vertices in the graph, Vl ⊂V be the set of known targets and F denote the set of all frames. Let N(t) denote the set of neighbors of vertex t ∈V . Let q = {q1, q2, . . . , q|V |} be the set of frame distributions, one per vertex. For each known target t ∈Vl, we have an initial frame distribution rt. For every edge in the graph, weights are defined as in Eq. 1. We find q by solving: arg minq P t∈Vl∥rt −qt∥2 + µ P t∈V,u∈N(t) wtu∥qt −qu∥2 + ν P t∈V ∥qt − 1 |F|∥2 s.t. ∀t ∈V, P f∈F qt(f) = 1 ∀t ∈V, f ∈F, qt(f) ≥0 (2) We use a squared loss to penalize various pairs of distributions over frames: ∥a−b∥2 = P f∈F(a(f)− b(f))2. The first term in Eq. 2 requires that, for known targets, we stay close to the initial frame distributions. The second term is the graph smoothness regularizer, which encourages the distributions of similar nodes (large wtu) to be similar. The final term is a regularizer encouraging all distributions to be uniform to the extent allowed by the first two terms. 
(If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that its converged marginal will be uniform over all frames.) µ and ν are hyperparameters whose choice we discuss in §6.3. Note that Eq. 2 is convex in q. While it is possible to derive a closed form solution for this objective 7Instead of a quadratic cost, an entropic distance measure could have been used, e.g., KL-divergence, considered by Subramanya and Bilmes (2009). We do not explore that direction in the current paper. function, it would require the inversion of a |V |×|V | matrix. Hence, like Subramanya et al. (2010), we employ an iterative method with updates defined as: γt(f) ← rt(f)1{t ∈Vl} (3) + µ X u∈N(t) wtuq(m−1) u (f) + ν |F| κt ← 1{t ∈Vl} + ν + µ X u∈N(t) wtu (4) q(m) t (f) ← γt(f)/κt (5) Here, 1{·} is an indicator function. The iterative procedure starts with a uniform distribution for each q(0) t . For all our experiments, we run 10 iterations of the updates. The final distribution of frames for a target t is denoted by q∗ t . 5 Learning and Inference for Frame-Semantic Parsing In this section, we briefly review learning and inference techniques used in the frame-semantic parser, which are largely similar to Das et al. (2010a), except the handling of unknown targets. Note that in all our experiments, we assume that the targets are marked in a given sentence of which we want to extract a frame-semantic analysis. Therefore, unlike the systems presented in SemEval’07, we do not define a target identification module. 5.1 Frame Identification For a given sentence x with frame-evoking targets t, let ti denote the ith target (a word sequence). We seek a list f = ⟨f1, . . . , fm⟩of frames, one per target. Let L be the set of targets found in the FrameNet annotations. Let Lf ⊆L be the subset of these targets annotated as evoking a particular frame f. The set of candidate frames Fi for ti is defined to include every frame f such that ti ∈Lf. If ti ̸∈L (in other words, ti is unseen), then Das et al. (2010a) considered all frames F in FrameNet as candidates. Instead, in our work, we check whether ti ∈V , where V are the vertices of the constructed graph, and set: Fi = {f : f ∈M-best frames under q∗ ti} (6) The integer M is set using cross-validation (§6.3). If ti ̸∈V , then all frames F are considered as Fi. 1439 The frame prediction rule uses a probabilistic model over frames for a target: fi ←arg maxf∈Fi P ℓ∈Lf p(f, ℓ| ti, x) (7) Note that a latent variable ℓ∈Lf is used, which is marginalized out. Broadly, lexical semantic relationships between the “prototype” variable ℓ(belonging to the set of seen targets for a frame f) and the target ti are used as features for frame identification, but since ℓis unobserved, it is summed out both during inference and training. A conditional log-linear model is used to model this probability: for f ∈Fi and ℓ∈Lf, pθ(f, ℓ| ti, x) = exp θ⊤g(f, ℓ, ti, x) P f′∈Fi P ℓ′∈Lf′ exp θ⊤g(f′, ℓ′, ti, x) (8) where θ are the model weights, and g is a vectorvalued feature function. This discriminative formulation is very flexible, allowing for a variety of (possibly overlapping) features; e.g., a feature might relate a frame f to a prototype ℓ, represent a lexicalsemantic relationship between ℓand ti, or encode part of the syntax of the sentence (Das et al., 2010b). 
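A minimal sketch of the resulting prediction rule (Eqs. 7-8) is given below: each candidate frame is scored by summing the exponentiated linear scores over its prototype lexical units, which marginalizes out the latent variable; since the normalizer in Eq. 8 is shared by all candidate frames, it can be dropped at prediction time. The feature function g and the parameter and prototype representations are placeholders, not the actual implementation.

import math

def identify_frame(target, sentence, candidate_frames, prototypes, theta, g):
    """Pick the frame maximizing sum over l in L_f of exp(theta . g(f, l, target, sentence)),
    as in Eq. 7; candidate_frames corresponds to the set F_i.

    prototypes: dict frame -> list of seen lexical units L_f
    theta:      dict feature name -> weight
    g:          feature function returning a dict of feature values
    """
    def dot(features):
        return sum(theta.get(k, 0.0) * v for k, v in features.items())

    best_frame, best_score = None, float("-inf")
    for f in candidate_frames:
        # A real implementation would work in log space (log-sum-exp) for stability.
        score = sum(math.exp(dot(g(f, l, target, sentence)))
                    for l in prototypes.get(f, []))
        if score > best_score:
            best_frame, best_score = f, score
    return best_frame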
Given some training data, which is of the form ⟨x(j), t(j), f(j), A(j)⟩ N j=1 (where N is the number of sentences in the data and A is the set of argument in a sentence), we discriminatively train the frame identification model by maximizing the following log-likelihood:8 max θ N X j=1 mj X i=1 log X ℓ∈L f(j) i pθ(f(j) i , ℓ| t(j) i , x(j)) (9) This non-convex objective function is locally optimized using a distributed implementation of LBFGS (Liu and Nocedal, 1989).9 5.2 Argument Identification Given a sentence x = ⟨x1, . . . , xn⟩, the set of targets t = ⟨t1, . . . , tm⟩, and a list of evoked frames 8We found no benefit from using an L2 regularizer. 9While training, in the partition function of the log-linear model, all frames F in FrameNet are summed up for a target ti instead of only Fi (as in Eq. 8), to learn interactions between the latent variables and different sentential contexts. f = ⟨f1, . . . , fm⟩corresponding to each target, argument identification or SRL is the task of choosing which of each fi’s roles are filled, and by which parts of x. We directly adopt the model of Das et al. (2010a) for the argument identification stage and briefly describe it here. Let Rfi = {r1, . . . , r|Rfi|} denote frame fi’s roles observed in FrameNet annotations. A set S of spans that are candidates for filling any role r ∈Rfi are identified in the sentence. In principle, S could contain any subsequence of x, but we consider only the set of contiguous spans that (a) contain a single word or (b) comprise a valid subtree of a word and all its descendants in a dependency parse. The empty span is also included in S, since some roles are not explicitly filled. During training, if an argument is not a valid subtree of the dependency parse (this happens due to parse errors), we add its span to S. Let Ai denote the mapping of roles in Rfi to spans in S. The model makes a prediction for each Ai(rk) (for all roles rk ∈Rfi): Ai(rk) ←arg maxs∈S p(s | rk, fi, ti, x) (10) A conditional log-linear model over spans for each role of each evoked frame is defined as: pψ(Ai(rk) = s | fi, ti, x) = (11) exp ψ⊤h(s, rk, fi, ti, x) P s′∈S exp ψ⊤h(s′, rk, fi, ti, x) This model is trained by optimizing: max ψ N X j=1 mj X i=1 |R f(j) i | X k=1 log pψ(A(j) i (rk) | f(j) i , t(j) i , x(j)) This objective function is convex, and we globally optimize it using the distributed implementation of L-BFGS. We regularize by including −1 10∥ψ∥2 2 in the objective (the strength is not tuned). Na¨ıve prediction of roles using Equation 10 may result in overlap among arguments filling different roles of a frame, since the argument identification model fills each role independently of the others. We want to enforce the constraint that two roles of a single frame cannot be filled by overlapping spans. Hence, illegal overlap is disallowed using a 10,000hypothesis beam search. 1440 UNKNOWN TARGETS ALL TARGETS Model Exact Match Partial Match Exact Match Partial Match SEMAFOR 23.08 46.62 82.97 90.51 Self-training 18.88 42.67 82.45 90.19 LinGraph 36.36 59.47 83.40 90.93 FullGraph 39.86 62.35∗ 83.51 91.02∗ Table 1: Frame identification results in percentage accuracy on 4,458 test targets. Bold scores indicate significant improvements relative to SEMAFOR and (∗) denotes significant improvements over LinGraph (p < 0.05). 6 Experiments and Results Before presenting our experiments and results, we will describe the datasets used in our experiments, and the various baseline models considered. 
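To make the non-overlap constraint from the argument identification step concrete, here is a hedged sketch of a beam search over joint role-span assignments for a single evoked frame; the span scores are assumed to be log-probabilities from the per-role model in Eq. 11, spans are half-open token ranges, and the data structures are illustrative (the actual system uses a 10,000-hypothesis beam).

def assign_roles(roles, span_scores, beam_width=10000):
    """Jointly assign spans to roles so that no two filled roles overlap.

    roles:       list of role names for the evoked frame.
    span_scores: dict role -> list of (span, log_prob), where a span is a
                 half-open (start, end) token range or None for the empty span.
    Returns the highest-scoring non-overlapping assignment found.
    """
    beam = [({}, 0.0, frozenset())]        # (assignment, log prob, used token indices)
    for role in roles:
        new_beam = []
        for assignment, logp, used in beam:
            for span, lp in span_scores[role]:
                tokens = frozenset(range(*span)) if span is not None else frozenset()
                if tokens & used:          # would overlap an earlier argument
                    continue
                extended = dict(assignment)
                extended[role] = span
                new_beam.append((extended, logp + lp, used | tokens))
        new_beam.sort(key=lambda h: h[1], reverse=True)
        beam = new_beam[:beam_width]       # keep only the best hypotheses
    return beam[0][0] if beam else {}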
6.1 Data We make use of the FrameNet 1.5 lexicon released in 2010. This lexicon is a superset of previous versions of FrameNet. It contains 154,607 exemplar sentences with one marked target and frame-role annotations. 78 documents with full-text annotations with multiple frames per sentence were also released (a superset of the SemEval’07 dataset). We randomly selected 55 of these documents for training and treated the 23 remaining ones as our test set. After scanning the exemplar sentences and the training data, we arrived at a set of 877 frames, 1,068 roles,10 and 9,263 targets. Our training split of the full-text annotations contained 3,256 sentences with 19,582 frame annotatations with corresponding roles, while the test set contained 2,420 sentences with 4,458 annotations (the test set contained fewer annotated targets per sentence). We also divide the 55 training documents into 5 parts for crossvalidation (see §6.3). The raw sentences in all the training and test documents were preprocessed using MXPOST (Ratnaparkhi, 1996) and the MST dependency parser (McDonald et al., 2005) following Das et al. (2010a). In this work we assume the frame-evoking targets have been correctly identified in training and test data. 10Note that the number of listed roles in the lexicon is nearly 9,000, but their number in actual annotations is a lot fewer. 6.2 Baselines We compare our model with three baselines. The first baseline is the purely supervised model of Das et al. (2010a) trained on the training split of 55 documents. Note that this is the strongest baseline available for this task;11 we refer to this model as “SEMAFOR.” The second baseline is a semi-supervised selftrained system, where we used SEMAFOR to label 70,000 sentences from the Gigaword corpus with frame-semantic parses. For finding targets in a raw sentence, we used a relaxed target identification scheme, where we marked every target seen in the lexicon and all other words which were not prepositions, particles, proper nouns, foreign words and Wh-words as potential frame evoking units. This was done so as to find unseen targets and get frame annotations with SEMAFOR on them. We appended these automatic annotations to the training data, resulting in 711,401 frame annotations, more than 36 times the supervised data. These data were next used to train a frame identification model (§5.1).12 This setup is very similar to Bejan (2009) who used selftraining to improve frame identification. We refer to this model as “Self-training.” The third baseline uses a graph constructed only with Lin’s thesaurus, without using supervised data. In other words, we followed the same scheme as in §4.1 but with the hyperparameter α = 0. Next, label propagation was run on this graph (and hyperparameters tuned using cross validation). The posterior distribution of frames over targets was next used for frame identification (Eq. 6-7), with SEMAFOR as the trained model. This model, which is very similar to our full model, is referred to as “LinGraph.” “FullGraph” refers to our full system. 6.3 Experimental Setup We used five-fold cross-validation to tune the hyperparameters α, K, µ, and M in our model. The 11We do not compare our model with other systems, e.g. the ones submitted to SemEval’07 shared task, because SEMAFOR outperforms them significantly (Das et al., 2010a) on the previous version of the data. Moreover, we trained our models on the new FrameNet 1.5 data, and training code for the SemEval’07 systems was not readily available. 
12Note that we only self-train the frame identification model and not the argument identification model, which is fixed throughout. 1441 UNKNOWN TARGETS ALL TARGETS Model Exact Match Partial Match Exact Match Partial Match P R F1 P R F1 P R F1 P R F1 SEMAFOR 19.59 16.48 17.90 33.03 27.80 30.19 66.15 61.64 63.82 70.68 65.86 68.18 Self-training 15.44 13.00 14.11 29.08 24.47 26.58 65.78 61.30 63.46 70.39 65.59 67.90 LinGraph 29.74 24.88 27.09 44.08 36.88 40.16 66.43 61.89 64.08 70.97 66.13 68.46 FullGraph 35.27∗28.84∗31.74∗48.81∗39.91∗43.92∗66.59∗62.01∗64.22∗71.11∗66.22∗68.58∗ Table 2: Full frame-semantic parsing precision, recall and F1 score on 2,420 test sentences. Bold scores indicate significant improvements relative to SEMAFOR and (∗) denotes significant improvements over LinGraph (p < 0.05). uniform regularization hyperparameter ν for graph construction was set to 10−6 and not tuned. For each cross-validation split, four folds were used to train a frame identification model, construct a graph, run label propagation and then the model was tested on the fifth fold. This was done for all hyperparameter settings, which were α ∈{0.2, 0.5, 0.8}, K ∈{5, 10, 15, 20}, µ ∈{0.01, 0.1, 0.3, 0.5, 1.0} and M ∈{2, 3, 5, 10}. The joint setting which performed the best across five-folds was α = 0.2, K = 10, µ = 1.0, M = 2. Similar tuning was also done for the baseline LinGraph, where α was set to 0, and rest of the hyperparameters were tuned (the selected hyperparameters were K = 10, µ = 0.1 and M = 2). With the chosen set of hyperparameters, the test set was used to measure final performance. The standard evaluation script from the SemEval’07 task calculates precision, recall, and F1score for frames and arguments; it also provides a score that gives partial credit for hypothesizing a frame related to the correct one in the FrameNet lexicon. We present precision, recall, and F1-measure microaveraged across the test documents, report labels-only matching scores (spans must match exactly), and do not use named entity labels. This evaluation scheme follows Das et al. (2010a). Statistical significance is measured using a reimplementation of Dan Bikel’s parsing evaluation comparator.13 6.4 Results Tables 1 and 2 present results for frame identification and full frame-semantic parsing respectively. They also separately tabulate the results achieved for unknown targets. Our full model, denoted by “FullGraph,” outperforms all the baselines for both tasks. Note that the Self-training model even falls 13http://www.cis.upenn.edu/˜dbikel/ software.html#comparator short of the supervised baseline SEMAFOR, unlike what was observed by Bejan (2009) for the frame identification task. The model using a graph constructed solely from the thesaurus (LinGraph) outperforms both the supervised and the self-training baselines for all tasks, but falls short of the graph constructed using the similarity metric that is a linear combination of distributional similarity and supervised frame similarity. This indicates that a graph constructed with some knowledge of the supervised data is more powerful. For unknown targets, the gains of our approach are impressive: 15.7% absolute accuracy improvement over SEMAFOR for frame identification, and 13.7% absolute F1 improvement over SEMAFOR for full frame-semantic parsing (both significant). 
When all the test targets are considered, the gains are still significant, resulting in 5.4% relative error reduction over SEMAFOR for frame identification, and 1.3% relative error reduction over SEMAFOR for full-frame semantic parsing. Although these improvements may seem modest, this is because only 3.2% of the test set targets are unseen in training. We expect that further gains would be realized in different text domains, where FrameNet coverage is presumably weaker than in news data. A semi-supervised strategy like ours is attractive in such a setting, and future work might explore such an application. Our approach also makes decoding much faster. For the unknown component of the test set, SEMAFOR takes a total 111 seconds to find the best set of frames, while the FullGraph model takes only 19 seconds to do so, thus bringing disambiguation time down by a factor of nearly 6. This is because our model now disambiguates between only M = 2 frames instead of the full set of 877 frames in FrameNet. For the full test set too, the speedup 1442 t = discrepancy.N t = contribution.N t = print.V t = mislead.V f q∗ t (f) f q∗ t (f) f q∗ t (f) f q∗ t (f) ∗SIMILARITY 0.076 ∗GIVING 0.167 ∗TEXT CREATION 0.081 EXPERIENCER OBJ 0.152 NATURAL FEATURES 0.066 MONEY 0.046 SENDING 0.054 ∗PREVARICATION 0.130 PREVARICATION 0.012 COMMITMENT 0.046 DISPERSAL 0.054 MANIPULATE INTO DOING 0.046 QUARRELING 0.007 ASSISTANCE 0.040 READING 0.042 COMPLIANCE 0.041 DUPLICATION 0.007 EARNINGS AND LOSSES 0.024 STATEMENT 0.028 EVIDENCE 0.038 Table 3: Top 5 frames according to the graph posterior distribution q∗ t (f) for four targets: discrepancy.N, contribution.N, print.V and mislead.V. None of these targets were present in the supervised FrameNet data. ∗marks the correct frame, according to the test data. EXPERIENCER OBJ is described in FrameNet as “Some phenomenon (the Stimulus) provokes a particular emotion in an Experiencer.” is noticeable, as SEMAFOR takes 131 seconds for frame identification, while the FullGraph model only takes 39 seconds. 6.5 Discussion The following is an example from our test set showing SEMAFOR’s output (for one target): REASON Discrepancies discrepancy.N between North Korean declarations and IAEA inspection findingsAction indicate that North Korea might have reprocessed enough plutonium for one or two nuclear weapons. Note that the model identifies an incorrect frame REASON for the target discrepancy.N, in turn identifying the wrong semantic role Action for the underlined argument. On the other hand, the FullGraph model exactly identifies the right semantic frame, SIMILARITY, as well as the correct role, Entities. This improvement can be easily explained. The excerpt from our constructed graph in Figure 2 shows the same target discrepancy.N in black, conveying that it did not belong to the supervised data. However, it is connected to the target difference.N drawn from annotated data, which evokes the frame SIMILARITY. Thus, after label propagation, we expect the frame SIMILARITY to receive high probability for the target discrepancy.N. Table 3 shows the top 5 frames that are assigned the highest posterior probabilities in the distribution q∗ t for four hand-selected test targets absent in supervised data, including discrepancy.N. For all of them, the FullGraph model identifies the correct frames for all four words in the test data by ranking these frames in the top M = 2. LinGraph also gets all four correct, Self-training only gets print.V/TEXT CREATION, and SEMAFOR gets none. 
Across unknown targets, on average the M = 2 most common frames in the posterior distribution q∗ t found by FullGraph have q(∗) t (f) = 7 877, or seven times the average across all frames. This suggests that the graph propagation method is confident only in predicting the top few frames out of the whole possible set. Moreover, the automatically selected number of frames to extract per unknown target, M = 2, suggests that only a few meaningful frames were assigned to unknown predicates. This matches the nature of FrameNet data, where the average frame ambiguity for a target type is 1.20. 7 Conclusion We have presented a semi-supervised strategy to improve the coverage of a frame-semantic parsing model. We showed that graph-based label propagation and resulting smoothed frame distributions over unseen targets significantly improved the coverage of a state-of-the-art semantic frame disambiguation model to previously unseen predicates, also improving the quality of full framesemantic parses. The improved parser is available at http://www.ark.cs.cmu.edu/SEMAFOR. Acknowledgments We are grateful to Amarnag Subramanya for helpful discussions. We also thank Slav Petrov, Nathan Schneider, and the three anonymous reviewers for valuable comments. This research was supported by NSF grants IIS0844507, IIS-0915187 and TeraGrid resources provided by the Pittsburgh Supercomputing Center under NSF grant number TG-DBS110003. 1443 References C. Baker, M. Ellsworth, and K. Erk. 2007. SemEval2007 Task 19: frame semantic structure extraction. In Proc. of SemEval. C. A. Bejan. 2009. Learning Event Structures From Text. Ph.D. thesis, The University of Texas at Dallas. Y. Bengio, O. Delalleau, and N. Le Roux. 2006. Label propagation and quadratic criterion. In SemiSupervised Learning. MIT Press. H. C. Boas. 2002. Bilingual FrameNet dictionaries for machine translation. In Proc. of LREC. D. Das and S. Petrov. 2011. Unsupervised part-ofspeech tagging with bilingual graph-based projections. In Proc. of ACL-HLT. D. Das, N. Schneider, D. Chen, and N. A. Smith. 2010a. Probabilistic frame-semantic parsing. In Proc. of NAACL-HLT. D. Das, N. Schneider, D. Chen, and N. A. Smith. 2010b. SEMAFOR 1.0: A probabilistic framesemantic parser. Technical Report CMU-LTI-10-001, Carnegie Mellon University. K. Erk and S. Pad´o. 2006. Shalmaneser - a toolchain for shallow semantic parsing. In Proc. of LREC. C. Fellbaum, editor. 1998. WordNet: an electronic lexical database. MIT Press, Cambridge, MA. C. J. Fillmore, C. R. Johnson, and M. R.L. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16(3). C. J. Fillmore. 1982. Frame semantics. In Linguistics in the Morning Calm, pages 111–137. Hanshin Publishing Co., Seoul, South Korea. M. Fleischman, N. Kwon, and E. Hovy. 2003. Maximum entropy models for FrameNet classification. In Proc. of EMNLP. P. Fung and B. Chen. 2004. BiFrameNet: bilingual frame semantics resource construction by crosslingual induction. In Proc. of COLING. H. F¨urstenau and M. Lapata. 2009. Semi-supervised semantic role labeling. In Proc. of EACL. D. Gildea and D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3). A.-M. Giuglea and A. Moschitti. 2006. Shallow semantic parsing based on FrameNet, VerbNet and PropBank. In Proc. of ECAI 2006. D. Graff. 2003. English Gigaword. Linguistic Data Consortium. R. Johansson and P. Nugues. 2007. LTH: semantic structure extraction using nonprojective dependency trees. In Proc. of SemEval. D. Lin. 1993. 
Principle-based parsing without overgeneration. In Proc. of ACL. D. Lin. 1994. Principar–an efficient, broadcoverage, principle-based parser. In Proc. of COLING. D. Lin. 1998. Automatic retrieval and clustering of similar words. In Proc. of COLING-ACL. D. C. Liu and J. Nocedal. 1989. On the limited memory bfgs method for large scale optimization. Math. Programming, 45(3). Y. Matsubayashi, N. Okazaki, and J. Tsujii. 2009. A comparative study on generalization of semantic roles in FrameNet. In Proc. of ACL-IJCNLP. R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proc. of ACL. Z.-Y. Niu, D.-H. Ji, and C. L. Tan. 2005. Word sense disambiguation using label propagation based semisupervised learning. In Proc. of ACL. S. Pad´o and M. Lapata. 2005. Cross-linguistic projection of role-semantic information. In Proc. of HLTEMNLP. M. Pennacchiotti, D. De Cao, R. Basili, D. Croce, and M. Roth. 2008. Automatic induction of FrameNet lexical units. In Proc. of EMNLP. S. P. Ponzetto and M. Strube. 2007. Deriving a large scale taxonomy from wikipedia. In Proc. of AAAI. A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proc. of EMNLP. L. Shi and R. Mihalcea. 2004. An algorithm for open text semantic parsing. In Proc. of Workshop on Robust Methods in Analysis of Natural Language Data. L. Shi and R. Mihalcea. 2005. Putting pieces together: combining FrameNet, VerbNet and WordNet for robust semantic parsing. In Computational Linguistics and Intelligent Text Processing: Proc. of CICLing 2005. Springer-Verlag. R. Snow, D. Jurafsky, and A. Y. Ng. 2006. Semantic taxonomy induction from heterogenous evidence. In Proc. of COLING-ACL. A. Subramanya and J. A. Bilmes. 2009. Entropic graph regularization in non-parametric semi-supervised classification. In Proc. of NIPS. A. Subramanya, S. Petrov, and F. Pereira. 2010. Efficient Graph-based Semi-Supervised Learning of Structured Tagging Models. In Proc. of EMNLP. C. A. Thompson, R. Levy, and C. D. Manning. 2003. A generative model for semantic role labeling. In Proc. of ECML. S. Tonelli and C. Giuliano. 2009. Wikipedia as frame information repository. In Proc. of EMNLP. X. Zhu, Z. Ghahramani, and J. D. Lafferty. 2003. Semisupervised learning using gaussian fields and harmonic functions. In Proc. of ICML. 1444
2011
144
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1445–1455, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics A Bayesian Model for Unsupervised Semantic Parsing Ivan Titov Saarland University Saarbruecken, Germany [email protected] Alexandre Klementiev Johns Hopkins University Baltimore, MD, USA [email protected] Abstract We propose a non-parametric Bayesian model for unsupervised semantic parsing. Following Poon and Domingos (2009), we consider a semantic parsing setting where the goal is to (1) decompose the syntactic dependency tree of a sentence into fragments, (2) assign each of these fragments to a cluster of semantically equivalent syntactic structures, and (3) predict predicate-argument relations between the fragments. We use hierarchical PitmanYor processes to model statistical dependencies between meaning representations of predicates and those of their arguments, as well as the clusters of their syntactic realizations. We develop a modification of the MetropolisHastings split-merge sampler, resulting in an efficient inference algorithm for the model. The method is experimentally evaluated by using the induced semantic representation for the question answering task in the biomedical domain. 1 Introduction Statistical approaches to semantic parsing have recently received considerable attention. While some methods focus on predicting a complete formal representation of meaning (Zettlemoyer and Collins, 2005; Ge and Mooney, 2005; Mooney, 2007), others consider more shallow forms of representation (Carreras and M`arquez, 2005; Liang et al., 2009). However, most of this research has concentrated on supervised methods requiring large amounts of labeled data. Such annotated resources are scarce, expensive to create and even the largest of them tend to have low coverage (Palmer and Sporleder, 2010), motivating the need for unsupervised or semi-supervised techniques. Conversely, research in the closely related task of relation extraction has focused on unsupervised or minimally supervised methods (see, for example, (Lin and Pantel, 2001; Yates and Etzioni, 2009)). These approaches cluster semantically equivalent verbalizations of relations, often relying on syntactic fragments as features for relation extraction and clustering (Lin and Pantel, 2001; Banko et al., 2007). The success of these methods suggests that semantic parsing can also be tackled as clustering of syntactic realizations of predicate-argument relations. While a similar direction has been previously explored in (Swier and Stevenson, 2004; Abend et al., 2009; Lang and Lapata, 2010), the recent work of (Poon and Domingos, 2009) takes it one step further by not only predicting predicate-argument structure of a sentence but also assigning sentence fragments to clusters of semantically similar expressions. For example, for a pair of sentences on Figure 1, in addition to inducing predicate-argument structure, they aim to assign expressions “Steelers” and “the Pittsburgh team” to the same semantic class Steelers, and group expressions “defeated” and “secured the victory over”. Such semantic representation can be useful for entailment or question answering tasks, as an entailment model can abstract away from specifics of syntactic and lexical realization relying instead on the induced semantic representation. For example, the two sentences in Figure 1 have identical semantic representation, and therefore can be hypothesized to be equivalent. 
[Figure 1: An example of two different syntactic trees with a common semantic representation WinPrize(Ravens, Steelers). The trees correspond to "Ravens defeated Steelers" and "Ravens secured the victory over the Pittsburgh team"; in both, Ravens fills the Winner role and Steelers the Opponent role.]
From the statistical modeling point of view, joint learning of predicate-argument structure and discovery of semantic clusters of expressions can also be beneficial, because it results in a more compact model of selectional preference, less prone to the data-sparsity problem (Zapirain et al., 2010). In this respect our model is similar to recent LDA-based models of selectional preference (Ritter et al., 2010; Séaghdha, 2010), and can even be regarded as their recursive and non-parametric extension. In this paper, we adopt the above definition of unsupervised semantic parsing and propose a Bayesian non-parametric approach which uses hierarchical Pitman-Yor (PY) processes (Pitman, 2002) to model statistical dependencies between predicate and argument clusters, as well as distributions over syntactic and lexical realizations of each cluster. Our non-parametric model automatically discovers the granularity of clustering appropriate for the dataset, unlike the parametric method of (Poon and Domingos, 2009), which has to perform model selection and use heuristics to penalize more complex models of semantics. Additional benefits generally expected from Bayesian modeling include the ability to encode prior linguistic knowledge in the form of hyperpriors and the potential for more reliable modeling of smaller datasets. A more detailed discussion of the relation between the Markov Logic Network (MLN) approach of (Poon and Domingos, 2009) and our non-parametric method is presented in Section 3. Hierarchical Pitman-Yor processes (or their special case, hierarchical Dirichlet processes) have previously been used in NLP, for example, in the context of syntactic parsing (Liang et al., 2007; Johnson et al., 2007). However, in all these cases the effective size of the state space (i.e., the number of sub-symbols in the infinite PCFG (Liang et al., 2007), or the number of adapted productions in the adaptor grammar (Johnson et al., 2007)) was not very large. In our case, the state space size equals the total number of distinct semantic clusters and is thus expected to be exceedingly large even for moderate datasets: for example, the MLN model induces 18,543 distinct clusters from 18,471 sentences of the GENIA corpus (Poon and Domingos, 2009). This suggests that standard inference methods for hierarchical PY processes, such as Gibbs sampling, Metropolis-Hastings (MH) sampling with uniform proposals, or the structured mean-field algorithm, are unlikely to result in efficient inference: for example, in standard Gibbs sampling, all of the thousands of alternatives would have to be considered at each sampling move. Instead, we use a split-merge MH sampling algorithm, which is a standard and efficient inference tool for non-hierarchical PY processes (Jain and Neal, 2000; Dahl, 2003) but has not previously been used in a hierarchical setting. We extend the sampler to include composition-decomposition of syntactic fragments in order to cluster fragments of variable size, as in the example in Figure 1, and we also include an argument role-syntax alignment move which attempts to improve the mapping between semantic roles and syntactic paths for some fixed predicate. Evaluating unsupervised models is a challenging task.
We evaluate our model both qualitatively, examining the revealed clustering of syntactic structures, and quantitatively, on a question answering task. In both cases, we follow (Poon and Domingos, 2009) in using the corpus of biomedical abstracts. Our model achieves favorable results significantly outperforming the baselines, including state-of-theart methods for relation extraction, and achieves scores comparable to those of the MLN model. The rest of the paper is structured as follows. Section 2 begins with a definition of the semantic parsing task. Sections 3 and 4 give background on the MLN model and the Pitman-Yor processes, respectively. In Sections 5 and 6, we describe our model and the inference method. Section 7 provides both qualitative and quantitative evaluation. Finally, ad1446 ditional related work is presented in Section 8. 2 Semantic Parsing In this section, we briefly define the unsupervised semantic parsing task and underlying aspects and assumptions relevant to our model. Unlike (Poon and Domingos, 2009), we do not use the lambda calculus formalism to define our task but rather treat it as an instance of frame-semantic parsing, or a specific type of semantic role labeling (Gildea and Jurafsky, 2002). The reason for this is two-fold: first, the frame semantics view is more standard in computational linguistics, sufficient to describe induced semantic representation and convenient to relate our method to the previous work. Second, lambda calculus is a considerably more powerful formalism than the predicate-argument structure used in frame semantics, normally supporting quantification and logical connectors (for example, negation and disjunction), neither of which is modeled by our model or in (Poon and Domingos, 2009). In frame semantics, the meaning of a predicate is conveyed by a frame, a structure of related concepts that describes a situation, its participants and properties (Fillmore et al., 2003). Each frame is characterized by a set of semantic roles (frame elements) corresponding to the arguments of the predicate. It is evoked by a frame evoking element (a predicate). The same frame can be evoked by different but semantically similar predicates: for example, both verbs “buy” and “purchase” evoke frame Commerce buy in FrameNet (Fillmore et al., 2003). The aim of the semantic role labeling task is to identify all of the frames evoked in a sentence and label their semantic role fillers. We extend this task and treat semantic parsing as recursive prediction of predicate-argument structure and clustering of argument fillers. Thus, parsing a sentence into this representation involves (1) decomposing the sentence into lexical items (one or more words), (2) assigning a cluster label (a semantic frame or a cluster of argument fillers) to every lexical item, and (3) predicting argument-predicate relations between the lexical items. This process is illustrated in Figure 1. For the leftmost example, the sentence is decomposed into three lexical items: “Ravens”, “defeated” and “Steelers”, and they are assigned to clusters Ravens, WinPrize and Steelers, respectively. Then Ravens and Steelers are selected as a Winner and an Opponent in the WinPrize frame. In this work, we define a joint model for the labeling and argument identification stages. Similarly to core semantic roles in FrameNet, semantic roles are treated as frame-specific in our model, as our model does not try to discover any correspondences between roles in different frames. 
As you can see from the above description, frames (which groups predicates with similar meaning such as the WinPrize frame in our example) and clusters of argument fillers (Ravens and Steelers) are treated in our definition in a similar way. For convenience, we will refer to both types of clusters as semantic classes.1 This definition of semantic parsing is closely related to a realistic relation extraction setting, as both clustering of syntactic forms of relations (or extraction patterns) and clustering of argument fillers for these relations is crucial for automatic construction of knowledge bases (Yates and Etzioni, 2009). In this paper, we make three assumptions. First, we assume that each lexical item corresponds to a subtree of the syntactic dependency graph of the sentence. This assumption is similar to the adjacency assumption in (Zettlemoyer and Collins, 2005), though ours may be more appropriate for languages with free or semi-free word order, where syntactic structures are inherently non-projective. Second, we assume that the semantic arguments are local in the dependency tree; that is, one lexical item can be a semantic argument of another one only if they are connected by an arc in the dependency tree. This is a slight simplification of the semantic role labeling problem but one often made. Thus, the argument identification and labeling stages consist of labeling each syntactic arc with a semantic role label. In comparison, the MLN model does not explicitly assume contiguity of lexical items and does not make this directionality assumption but their clustering algorithm uses initialization and clusterization moves such that the resulting model also obeys both of these constraints. Third, as in (Poon and Domingos, 2009), we do not model polysemy as we assume 1Semantic classes correspond to lambda-form clusters in (Poon and Domingos, 2009) terminology. 1447 that each syntactic fragment corresponds to a single semantic class. This is not a model assumption and is only used at inference as it reduces mixing time of the Markov chain. It is not likely to be restrictive for the biomedical domain studied in our experiments. As in some of the recent work on learning semantic representations (Eisenstein et al., 2009; Poon and Domingos, 2009), we assume that dependency structures are provided for every sentence. This assumption allows us to construct models of semantics not Markovian within a sequence of words (see for an example a model described in (Liang et al., 2009)), but rather Markovian within a dependency tree. Though we include generation of the syntactic structure in our model, we would not expect that this syntactic component would result in an accurate syntactic model, even if trained in a supervised way, as the chosen independence assumptions are oversimplistic. In this way, we can use a simple generative story and build on top of the recent success in syntactic parsing. 3 Relation to the MLN Approach The work of (Poon and Domingos, 2009) models joint probability of the dependency tree and its latent semantic representation using Markov Logic Networks (MLNs) (Richardson and Domingos, 2006), selecting parameters (weights of first-order clauses) to maximize the probability of the observed dependency structures. For each sentence, the MLN induces a Markov network, an undirected graphical model with nodes corresponding to ground atoms and cliques corresponding to ground clauses. 
The MLN is a powerful formalism that allows for modeling complex interactions between features of the input (syntactic trees) and the latent output (semantic representation); however, unsupervised learning of semantics with general MLNs can be prohibitively expensive. The reason for this is that MLNs are undirected models, and when learned to maximize the likelihood of syntactically annotated sentences, they would require marginalization not only over the semantic representation but also over the entire space of syntactic structures and lexical units. Given the complexity of the semantic parsing task and the need to tackle large datasets, even approximate methods are likely to be infeasible. In order to overcome this problem, (Poon and Domingos, 2009) group parameters and impose local normalization constraints within each group. Given these normalization constraints, and additional structural constraints satisfied by the model, namely that the clauses should be engineered in such a way that they induce tree-structured graphs for every sentence, the parameters can be estimated by a variant of the EM algorithm. The class of such restricted MLNs is equivalent to the class of directed graphical models over the same set of random variables corresponding to fragments of syntactic and semantic structure. Given that the above constraints do not directly fit into the MLN methodology, we believe that it is more natural to regard their model as a directed model with an underlying generative story specifying how the semantic structure is generated and how the syntactic parse is drawn for this semantic structure. This view would facilitate understanding what kind of features can easily be integrated into the model, simplify the application of non-parametric Bayesian techniques and expedite the use of inference techniques designed specifically for directed models. Our approach makes one step in this direction by proposing a non-parametric version of such a generative model.
4 Hierarchical Pitman-Yor Processes
The central component of our non-parametric Bayesian model is the Pitman-Yor (PY) process, a generalization of the Dirichlet process (DP) (Ferguson, 1973). We use PY processes to model distributions of semantic classes appearing as an argument of other semantic classes. We also use them to model distributions of syntactic realizations for each semantic class and distributions of syntactic dependency arcs for argument types. In this section we present relevant background on PY processes. For a more detailed consideration we refer the reader to (Teh et al., 2006). The Pitman-Yor process over a set S, denoted PY(α, β, H), is a stochastic process whose samples G0 constitute probability measures on partitions of S. In practice, we do not need to draw measures, as they can be analytically marginalized out. The conditional distribution of x_{j+1} given the previous j draws, with G0 marginalized out, follows (Blackwell and MacQueen, 1973)
x_{j+1} | x_1, ..., x_j  ∼  Σ_{k=1}^{K} [(j_k − β) / (j + α)] δ_{φ_k}  +  [(Kβ + α) / (j + α)] H.   (1)
where φ_1, ..., φ_K are the K distinct values assigned to x_1, x_2, ..., x_j. The number of times φ_k was assigned is denoted j_k, so that j = Σ_{k=1}^{K} j_k. The parameter β < 1 controls how heavy the tail of the distribution is: when it approaches 1, a new value is assigned to every draw; when β = 0, the PY process reduces to a DP. The expected value of K scales as O(αn^β) with the number of draws n, while it scales only logarithmically for DP processes.
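To make Equation (1) concrete, the following is a minimal sketch (ours, not part of the original paper) of drawing from a marginalized PY process; the base measure H is supplied as a sampling function, and all names are illustrative.

```python
import random

def py_draw(draws, alpha, beta, sample_from_base):
    """Draw x_{j+1} given the previous draws, following the predictive rule in Equation (1).

    draws: list of previously drawn values x_1..x_j
    alpha, beta: PY concentration and discount parameters (0 <= beta < 1)
    sample_from_base: function returning a fresh draw from the base measure H
    """
    j = len(draws)
    counts = {}
    for x in draws:                      # j_k = number of times value phi_k was drawn so far
        counts[x] = counts.get(x, 0) + 1
    K = len(counts)

    r = random.random() * (j + alpha)
    for phi_k, j_k in counts.items():    # reuse phi_k with probability (j_k - beta) / (j + alpha)
        r -= j_k - beta
        if r < 0:
            return phi_k
    return sample_from_base()            # new value with probability (K*beta + alpha) / (j + alpha)

if __name__ == "__main__":
    # usage sketch: a PY process over integers with a (nearly collision-free) uniform base measure
    xs = []
    for _ in range(1000):
        xs.append(py_draw(xs, alpha=1.0, beta=0.5,
                          sample_from_base=lambda: random.randrange(10**6)))
    print("distinct values:", len(set(xs)))
```

Repeatedly feeding the output back in reproduces the rich-get-richer clustering behaviour described above; the number of distinct values grows roughly as O(αn^β).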
PY processes are expected to be more appropriate for many NLP problems, as they model power-law type distributions common for natural language (Teh, 2006). Hierarchical Dirichlet Processes (HDP) or hierarchical PY processes are used if the goal is to draw several related probability measures for the same set S. For example, they can be used to generate transition distributions of a Markov model, HDPHMM (Teh et al., 2006; Beal et al., 2002). For such a HMM, the top-level state proportions are drawn from the top-level stick breaking construction γ ∼GEM(α, β), and then the individual transition distributions for every state z = 1, 2, . . . φz are drawn from PY (γ, α′, β′). The parameters α′ and β′ control how similar the individual transition distributions φz are to the top-level state proportions γ, or, equivalently, how similar the transition distributions are to each other. 5 A Model for Semantic Parsing Our model of semantics associates with each semantic class a set of distributions which govern the generation of corresponding syntactic realizations2 and the selection of semantic classes for its arguments. Each sentence is generated starting from the root of its dependency tree, recursively drawing a semantic class, its syntactic realization, arguments and semantic classes for the arguments. Below we describe the model by first defining the set of the model parameters and then explaining the generation of in2Syntactic realizations are syntactic tree fragments, and therefore they correspond both to syntactic and lexical variations. dividual sentences. The generative story is formally presented in Figure 2. We associate with each semantic class c, c = 1, 2, . . . , a distribution of its syntactic realizations φc. For example, for the frame WinPrize illustrated in Figure 1 this distribution would concentrate at syntactic fragments corresponding to lexical items “defeated”, “secured the victory” and “won”. The distribution is drawn from DP(w(C), H(C)), where H(C) is a base measure over syntactic subtrees. We use a simple generative process to define the probability of a subtree, the underlying model is similar to the base measures used in the Bayesian tree-substitution grammars (Cohn et al., 2009). We start by generating a word w uniformly from the treebank distribution, then we decide on the number of dependents of w using the geometric distribution Geom(q(C)). For every dependent we generate a dependency relation r and a lexical form w′ from P(r|w)P(w′|r), where probabilities P are based on add-0.1 smoothed treebank counts. The process is continued recursively. The smaller the parameter q(C), the lower is the probability assigned to larger sub-trees. Parameters ψc,t and ψ+ c,t, t = 1, . . . , T, define a distribution over vectors (m1, m2, . . . , mT ) where mt is the number of times an argument of type t appears for a given semantic frame occurrence3. For the frame WinPrize these parameters would enforce that there exists exactly one Winner and exactly one Opponent for each occurrence of WinPrize. The parameter ψc,t defines the probability of having at least one argument of type t. If 0 is drawn from ψc,t then mt = 0, otherwise the number of additional arguments of type t (mt −1) is drawn from the geometric distribution Geom(ψ+ c,t). 
This generative story is flexible enough to accommodate both argument types which appear at most once per semantic class occurrence (e.g., agents), and argument types which frequently appear multiple times per semantic class occurrence (e.g., arguments corresponding to descriptors). Parameters φc,t, t = 1, . . . , T, define the dis3For simplicity, we assume that each semantic class has T associated argument types, note that this is not a restrictive assumption as some of the argument types can remain unused, and T can be selected to be sufficiently large to accommodate all important arguments. 1449 Parameters: γ ∼GEM(α0, β0) [top-level proportions of classes] θroot ∼PY (αroot, βroot, γ) [distrib of sem classes at root] for each sem class c = 1, 2, . . . : φc ∼DP(w(C), H(C)) [distribs of synt realizations] for each arg type t = 1, 2, . . . T: ψc,t ∼Beta(η0, η1) [first argument generation] ψ+ c,t ∼Beta(η+ 0 , η+ 1 ) [geom distr for more args] φc,t ∼DP(w(A), H(A)) [distribs of synt paths] θc,t ∼PY (α, β, γ) [distrib of arg fillers] Data Generation: for each sentence: croot ∼θroot [choose sem class for root] GenSemClass(croot) GenSemClass(c): s ∼φc [draw synt realization] for each arg type t = 1, . . . , T: if [n ∼ψc,t] = 1: [at least one arg appears] GenArgument(c, t) [draw one arg] while [n ∼ψ+ c,t] = 1: [continue generation] GenArgument(c, t) [draw more args] GenArgument(c, t): ac,t ∼φc,t [draw synt relation] c′ c,t ∼θc,t [draw sem class for arg] GenSemClass(c′ c,t) [recurse] Figure 2: The generative story for the Bayesian model for unsupervised semantic parsing. tributions over syntactic paths for the argument type t. In our example, for argument type Opponent, this distribution would associate most of the probability mass with relations pp over, dobj and pp against. These distributions are drawn from DP(w(A), H(A)). In this paper we only consider paths consisting of a single relation, therefore the base probability distribution H(A) is just normalized frequencies of dependency relations in the treebank. The crucial part of the model are the selectionpreference parameters θc,t, the distributions of semantic classes c′ for each argument type t of class c. For arguments Winner and Opponent of the frame WinPrize these distributions would assign most of the probability mass to semantic classes denoting teams or players. Distributions θc,t are drawn from a hierarchical PY process: first, top-level proportions of classes γ are drawn from GEM(α0, β0), and then the individual distributions θc,t over c′ are chosen from PY (α, β, γ). For each sentence, we first generate a class corresponding to the root of the dependency tree from the root-specific distribution of semantic classes θroot. Then we recursively generate classes for the entire sentence. For a class c, we generate the syntactic realization s and for each of the T types, decide how many arguments of that type to generate (see GenSemClass in Figure 2). Then we generate each of the arguments (see GenArgument) by first generating a syntactic arc ac,t, choosing a class as its filler c′ c,t and, finally, recursing. 6 Inference In our model, latent states, modeled with hierarchical PY processes, correspond to distinct semantic classes and, therefore, their number is expected to be very large for any reasonable model of semantics. As a result, many standard inference techniques, such as Gibbs sampling, or the structured mean-field method are unlikely to result in tractable inference. 
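Before describing the sampler, it may help to restate the generative story of Figure 2 procedurally. The sketch below is our own simplified rendering, not the authors' code: the PY/DP draws are replaced by fixed finite distributions passed in as plain Python structures, the probabilities in the toy example are invented, and all function names are illustrative.

```python
import random

def bernoulli(p):
    return 1 if random.random() < p else 0

def sample(weighted):
    """Draw from a list of (value, probability) pairs."""
    r, acc = random.random(), 0.0
    for value, p in weighted:
        acc += p
        if r < acc:
            return value
    return weighted[-1][0]

def gen_class(c, params, depth=0, max_depth=5):
    """Recursively generate the subtree rooted in an occurrence of semantic class c (GenSemClass)."""
    if depth > max_depth:                       # guard against unbounded recursion in the sketch
        return None
    node = {"class": c, "realization": sample(params[c]["realizations"]), "args": []}
    for t, arg in params[c]["args"].items():
        n = bernoulli(arg["psi"])               # at least one argument of type t?
        while n:
            node["args"].append(gen_argument(arg, params, depth))
            n = bernoulli(arg["psi+"])          # geometric number of extra arguments
    return node

def gen_argument(arg, params, depth):
    """Draw a syntactic relation and a filler class, then recurse (GenArgument)."""
    relation = sample(arg["paths"])
    filler = sample(arg["filler"])
    return {"relation": relation, "tree": gen_class(filler, params, depth + 1)}

# toy parameters for the WinPrize fragment of Figure 1 (illustrative numbers)
toy = {
    "WinPrize": {"realizations": [("defeated", 0.6), ("secured the victory over", 0.4)],
                 "args": {"Winner":   {"psi": 1.0, "psi+": 0.0, "paths": [("subj", 1.0)],
                                       "filler": [("Ravens", 1.0)]},
                          "Opponent": {"psi": 1.0, "psi+": 0.0,
                                       "paths": [("dobj", 0.7), ("pp_over", 0.3)],
                                       "filler": [("Steelers", 1.0)]}}},
    "Ravens":   {"realizations": [("Ravens", 1.0)], "args": {}},
    "Steelers": {"realizations": [("Steelers", 0.5), ("the Pittsburgh team", 0.5)], "args": {}},
}
print(gen_class("WinPrize", toy))
```

In the full model the finite distributions above would of course be the draws from the hierarchical PY and DP priors described in Section 5.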
One of the standard and most efficient samplers for non-hierarchical PY processes are split-merge MH samplers (Jain and Neal, 2000; Dahl, 2003). In this section we explain how split-merge samplers can be applied to our model. 6.1 Split and Merge Moves On each move, split-merge samplers decide either to merge two states into one (in our case, merge two semantic classes), or split one state into two. These moves can be computed efficiently for our model of semantics. Note that for any reasonable model of semantics only a small subset of the entire set of semantic classes can be used as an argument for some fixed semantic class due to selectional preferences exhibited by predicates. For instance, only teams or players can fill arguments of the frame WinPrize in our running example. As a result, only a small number of terms in the joint distribution has to be evaluated on every move we may consider. When estimating the model, we start with assigning each distinct word (or, more precisely, a tuple of a word’s stem and its part-of-speech tag) to an individual semantic class. Then, we would iterate by selecting a random pair of class occurrences, and decide, at random, whether we attempt to perform a split-merge move or a compose-decompose move. 1450 6.2 Compose and Decompose Moves The compose-decompose operations modify syntactic fragments assigned to semantic classes, composing two neighboring dependency sub-trees or decomposing a dependency sub-tree. If the two randomly-selected syntactic fragments s and s′ correspond to different classes, c and c′, we attempt to compose them into ˆs and create a new semantic class ˆc. All occurrences of ˆs are assigned to this new class ˆc. For example, if two randomly-selected occurrences have syntactic realizations “secure” and “victory” they can be composed to obtain the syntactic fragment “secure dobj −−→victory”. This fragment will be assigned to a new semantic class which can later be merged with other classes, such as the ones containing syntactic realizations “defeat” or “win”. Conversely, if both randomly-selected syntactic fragments are already composed in the corresponding class, we attempt to split them. 6.3 Role-Syntax Alignment Move Merge, compose and decompose moves require recomputation of mapping between argument types (semantic roles) and syntactic fragments. Computing the best statistical mapping is infeasible and proposing a random mapping will result in many attempted moves being rejected. Instead we use a greedy randomized search method called Gibbs scan (Dahl, 2003). Though it is a part of the above 3 moves, this alignment move is also used on its own to induce semantic arguments for classes (frames) with a single syntactic realization. The Gibbs scan procedure is also used during the split move to select one of the newly introduced classes for each considered syntactic fragment. 6.4 Informed Proposals Since the number of classes is very large, selecting examples at random would result in a relatively low proportion of moves getting accepted, and, consequently, in a slow-mixing Markov chain. Instead of selecting both class occurrences uniformly, we select the first occurrence from a uniform distribution and then use a simple but effective proposal distribution for selecting the second class occurrence. Let us denote the class corresponding to the first occurrence as c1 and its syntactic realization as s1 with a head word w1. We begin by selecting uniformly randomly whether to attempt a composedecompose or a split-merge move. 
If we chose a compose-decompose move, we look for words (children) which can be attached below the syntactic fragment s1. We use the normalized counts of these words conditioned on the parent s1 to select the second word w2. We then select a random occurrence of w2; if it is a part of syntactic realization of c1 then a decompose move is attempted. Otherwise, we try to compose the corresponding clusters together. If we selected a split-merge move, we use a distribution based on the cosine similarity of lexical contexts of the words. The context is represented as a vector of counts of all pairs of the form (head word, dependency type) and (dependent, dependency type). So, instead of selecting a word occurrence uniformly, each occurrence of every word w2 is weighted by its similarity to w1, where the similarity is based on the cosine distance. As the moves are dependent only on syntactic representations, all the proposal distributions can be computed once at the initialization stage.4 7 Empirical Evaluation We induced a semantic representation over a collection of texts and evaluated it by answering questions about the knowledge contained in the corpus. We used the GENIA corpus (Kim et al., 2003), a dataset of 1999 biomedical abstracts, and a set of questions produced by (Poon and Domingos, 2009). A example question is shown in Figure 3. All model hyperpriors were set to maximize the posterior, except for w(A) and w(C), which were set to 1.e−10 and 1.e−35, respectively. Inference was run for around 300,000 sampling iterations until the percentage of accepted split-merge moves became lower than 0.05%. Let us examine some of the induced semantic classes (Table 1) before turning to the question answering task. Almost all of the clustered syntactic 4In order to minimize memory usage, we used frequency cut-off of 10. For split-merge moves, we select words based on the cosine distance if the distance is below 0.95 and sample the remaining words uniformly. This also reduces the required memory usage. 1451 Class Variations 1 motif, sequence, regulatory element, response element, element, dna sequence 2 donor, individual, subject 3 important, essential, critical 4 dose, concentration 5 activation, transcriptional activation, transactivation 6 b cell, t lymphocyte, thymocyte, b lymphocyte, t cell, t-cell line, human lymphocyte, t-lymphocyte 7 indicate, reveal, document, suggest, demonstrate 8 augment, abolish, inhibit, convert, cause, abrogate, modulate, block, decrease, reduce, diminish, suppress, up-regulate, impair, reverse, enhance 9 confirm, assess, examine, study, evaluate, test, resolve, determine, investigate 10 nf-kappab, nf-kappa b, nfkappab, nf-kb 11 antiserum, antibody, monoclonal antibody, ab, antisera, mab 12 tnfalpha, tnf-alpha, il-6, tnf Table 1: Examples of the induced semantic classes. realizations have a clear semantic connection. Cluster 6, for example, clusters lymphocytes with the exception of thymocyte, a type of cell which generates T cells. Cluster 8 contains verbs roughly corresponding to Cause change of position on a scale frame in FrameNet. Verbs in class 9 are used in the context of providing support for a finding or an action, and many of them are listed as evoking elements for the Evidence frame in FrameNet. Argument types of the induced classes also show a tendency to correspond to semantic roles. For example, an argument type of class 2 is modeled as a distribution over two argument parts, prep of and prep from. 
The corresponding arguments define the origin of the cells (transgenic mouse, smoker, volunteer, donor, . . .). We now turn to the QA task and compare our model (USP-BAYES) with the results of baselines considered in (Poon and Domingos, 2009). The first set of baselines looks for answers by attempting to match a verb and its argument in the question with the input text. The first version (KW) simply returns the rest of the sentence on the other side of the verb, while the second (KW-SYN) uses syntactic information to extract the subject or the object of the verb. Other baselines are based on state-of-the-art relation extraction systems. When the extracted relation and one of the arguments match those in a given Total Correct Accuracy KW 150 67 45% KW-SYN 87 67 77% TR-EXACT 29 23 79% TR-SUB 152 81 53% RS-EXACT 53 24 45% RS-SUB 196 81 41% DIRT 159 94 59% USP-MLN 334 295 88% USP-BAYES 325 259 80% Table 2: Performance on the QA task. question, the second argument is returned as an answer. The systems include TextRunner (TR) (Banko et al., 2007), RESOLVER (RS) (Yates and Etzioni, 2009) and DIRT (Lin and Pantel, 2001). The EXACT versions of the methods return answers when they match the question argument exactly, and the SUB versions produce answers containing the question argument as a substring. Similarly to the MLN system (USP-MLN), we generate answers as follows. We use our trained model to parse a question, i.e. recursively decompose it into lexical items and assign them to semantic classes induced at training. Using this semantic representation, we look for the type of an argument missing in the question, which, if found, is reported as an answer. It is clear that overly coarse clusters of argument fillers or clustering of semantically related but not equivalent relations can hurt precision for this evaluation method. Each system is evaluated by counting the answers it generates, and computing the accuracy of those answers.5 Table 2 summarizes the results. First, both USP models significantly outperform all other baselines: even though the accuracy of KW-SYN and TR-EXACT are comparable with our accuracy, the number of correct answers returned by USPBayes is 4 and 11 times smaller than those of KWSYN and TR-EXACT, respectively. While we are not beating the MLN baseline, the difference is not significant. The effective number of questions is relatively small (less than 80 different questions are answered by any of the models). More than 50% of USP-BAYES mistakes were due to wrong interpretation of only 5 different questions. From another point of view, most of the mistakes are explained 5The true recall is not known, as computing it would require exhaustive annotation of the entire corpus. 1452 Question: What does cyclosporin A suppress? Answer: expression of EGR-2 Sentence: As with EGR-3 , expression of EGR-2 was blocked by cyclosporin A . Question: What inhibits tnf-alpha? Answer: IL -10 Sentence: Our previous studies in human monocytes have demonstrated that interleukin ( IL ) -10 inhibits lipopolysaccharide ( LPS ) -stimulated production of inflammatory cytokines , IL-1 beta , IL-6 , IL-8 , and tumor necrosis factor ( TNF ) -alpha by blocking gene transcription . Figure 3: An example of questions, answers by our model and the corresponding sentences from the dataset. by overly coarse clustering corresponding to just 3 classes, namely, 30%, 25% and 20% of errors are due to the clusters 6, 8 and 12 (Figure 1), respectively. 
Though all these clusters have clear semantic interpretation (white blood cells, predicates corresponding to changes and cykotines associated with cancer progression, respectively), they appear to be too coarse for the QA method we use in our experiments. Though it is likely that tuning and different heuristics may result in better scores, we chose not to perform excessive tuning, as the evaluation dataset is fairly small. 8 Related Work There is a growing body of work on statistical learning for different versions of the semantic parsing problem (e.g., (Gildea and Jurafsky, 2002; Zettlemoyer and Collins, 2005; Ge and Mooney, 2005; Mooney, 2007)), however, most of these methods rely on human annotation, or some weaker forms of supervision (Kate and Mooney, 2007; Liang et al., 2009; Titov and Kozhevnikov, 2010; Clarke et al., 2010) and very little research has considered the unsupervised setting. In addition to the MLN model (Poon and Domingos, 2009), another unsupervised method has been proposed in (Goldwasser et al., 2011). In that work, the task is to predict a logical formula, and the only supervision used is a lexicon providing a small number of examples for every logical symbol. A form of self-training is then used to bootstrap the model. Unsupervised semantic role labeling with a generative model has also been considered (Grenager and Manning, 2006), however, they do not attempt to discover frames and deal only with isolated predicates. Another generative model for SRL has been proposed in (Thompson et al., 2003), but the parameters were estimated from fully annotated data. The unsupervised setting has also been considering for the related problem of learning narrative schemas (Chambers and Jurafsky, 2009). However, their approach is quite different from our Bayesian model as it relies on similarity functions. Though in this work we focus solely on the unsupervised setting, there has been some successful work on semi-supervised semantic-role labeling, including the Framenet version of the problem (F¨urstenau and Lapata, 2009). Their method exploits graph alignments between labeled and unlabeled examples, and, therefore, crucially relies on the availability of labeled examples. 9 Conclusions and Future Work In this work, we introduced a non-parametric Bayesian model for the semantic parsing problem based on the hierarchical Pitman-Yor process. The model defines a generative story for recursive generation of lexical items, syntactic and semantic structures. We extend the split-merge MH sampling algorithm to include composition-decomposition moves, and exploit the properties of our task to make it efficient in the hierarchical setting we consider. We plan to explore at least two directions in our future work. First, we would like to relax some of unrealistic assumptions made in our model: for example, proper modeling of alterations requires joint generation of syntactic realizations for predicateargument relations (Grenager and Manning, 2006; Lang and Lapata, 2010), similarly, proper modeling of nominalization implies support of arguments not immediately local in the syntactic structure. The second general direction is the use of the unsupervised methods we propose to expand the coverage of existing semantic resources, which typically require substantial human effort to produce. 
Acknowledgements The authors acknowledge the support of the MMCI Cluster of Excellence, and thank Chris Callison-Burch, Alexis Palmer, Caroline Sporleder, Ben Van Durme and the anonymous reviewers for their helpful comments and suggestions. 1453 References O. Abend, R. Reichart, and A. Rappoport. 2009. Unsupervised argument identification for semantic role labeling. In Proceedings of ACL-IJCNLP, pages 28–36, Singapore. Michele Banko, Michael J Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), pages 2670–2676. Matthew J. Beal, Zoubin Ghahramani, and Carl E. Rasmussen. 2002. The infinite hidden markov model. In Machine Learning, pages 29–245. MIT Press. David Blackwell and James B. MacQueen. 1973. Ferguson distributions via polya urn schemes. The Annals of Statistics, 1(2):353–355. Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 Shared Task: Semantic Role Labeling. In Proceedings of the 9th Conference on Natural Language Learning, CoNLL-2005, Ann Arbor, MI USA. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proc. of the Annual Meeting of the Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACL-IJCNLP). James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world’s response. In Proc. of the Conference on Computational Natural Language Learning (CoNLL). Trevor Cohn, Sharon Goldwater, and Phil Blunsom. 2009. Inducing compact but accurate tree-substitution grammars. In HLT-NAACL, pages 548–556. David B. Dahl. 2003. An improved merge-split sampler for conjugate dirichlet process mixture models. Technical Report 1086, Department of Statistics, University of Wiscosin - Madison, November. Jacob Eisenstein, James Clarke, Dan Goldwasser, and Dan Roth. 2009. Reading to learn: Constructing features from semantic abstracts. In Proceedings of EMNLP. Thomas S. Ferguson. 1973. A bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230. C. J. Fillmore, C. R. Johnson, and M. R. L. Petruck. 2003. Background to framenet. International Journal of Lexicography, 16:235–250. Hagen F¨urstenau and Mirella Lapata. 2009. Graph alignment for semi-supervised semantic role labeling. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Ruifang Ge and Raymond J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CONLL-05), Ann Arbor, Michigan. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labelling of semantic roles. Computational Linguistics, 28(3):245–288. Dan Goldwasser, Roi Reichart, James Clarke, and Dan Roth. 2011. Confidence driven unsupervised semantic parsing. In Proc. of the Meeting of Association for Computational Linguistics (ACL), Portland, OR, USA. Trond Grenager and Christoph Manning. 2006. Unsupervised discovery of a statistical verb lexicon. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Sonia Jain and Radford Neal. 2000. A split-merge markov chain monte carlo procedure for the dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13:158–182. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. 
Bayesian inference for PCFGs via Markov chain Monte Carlo. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, Rochester, USA. Rohit J. Kate and Raymond J. Mooney. 2007. Learning language semantics from ambigous supervision. In Association for the Advancement of Artificial Intelligence (AAAI), pages 895–900. Jin-Dong Kim, Tomoko Ohta, Yuka Tateisi, and Jun’ichi Tsujii. 2003. Genia corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19:i180– i182. Joel Lang and Mirella Lapata. 2010. Unsupervised induction of semantic roles. In Proceedings of the 48rd Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden. Percy Liang, Slav Petrov, Michael Jordan, and Dan Klein. 2007. The infinite PCFG using hierarchical dirichlet processes. In Joint Conf. on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 688–697, Prague, Czech Republic. Percy Liang, Michael I. Jordan, and Dan Klein. 2009. Learning semantic correspondences with less supervision. In Proc. of the Annual Meeting of the Association for Computational Linguistics and International Joint Conference on Natural Language Processing (ACLIJCNLP). Dekang Lin and Patrick Pantel. 2001. Dirt – discovery of inference rules from text. In Proc. of International Conference on Knowledge Discovery and Data Mining, pages 323–328. 1454 Raymond J. Mooney. 2007. Learning for semantic parsing. In Proceedings of the 8th International Conference on Computational Linguistics and Intelligent Text Processing, pages 982–991. Alexis Palmer and Caroline Sporleder. 2010. Evaluating framenet-style semantic parsing: the role of coverage gaps in framenet. In Proceedings of the Conference on Computational Linguistics (COLING-2000), Beijing. Jim Pitman. 2002. Poisson-dirichlet and gem invariant distributions for split-and-merge transformations of an interval partition. Combinatorics, Probability and Computing, 11:501–514. Hoifung Poon and Pedro Domingos. 2009. Unsupervised semantic parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, (EMNLP-09). Matt Richardson and Pedro Domingos. 2006. Markov logic networks. Machine Learning, 62:107–136. Alan Ritter, Mausam, and Oren Etzioni. 2010. A latent dirichlet allocation method for selectional preferences. In Proceedings of the 48rd Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden. Diarmuid ´O S´eaghdha. 2010. Latent variable models of selectional preference. In Proceedings of the 48rd Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden. R. Swier and S. Stevenson. 2004. Unsupervised semantic role labelling. In Proceedings of EMNLP, pages 95–102, Barcelona, Spain. Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. 2006. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. Y. W. Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 985– 992. Cynthia A. Thompson, Roger Levy, and Christopher D. Manning. 2003. A generative model for semantic role labeling. In In Senseval-3, pages 397–408. Ivan Titov and Mikhail Kozhevnikov. 2010. Bootstrapping semantic analyzers from non-contradictory texts. 
In Proceedings of the 48rd Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden. Alexander Yates and Oren Etzioni. 2009. Unsupervised methods for determining object and relation synonyms on the web. Journal of Artificial Intelligence Research, 34:255–296. B. Zapirain, E. Agirre, L. L. M`arquez, and M. Surdeanu. 2010. Improving semantic role classification with selectional prefrences. In Proceedings of the Meeting of the North American chapter of the Association for Computational Linguistics (NAACL 2010), Los Angeles. Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammar. In Proceedings of the Twenty-first Conference on Uncertainty in Artificial Intelligence, Edinburgh, UK, August. 1455
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1456–1465, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Unsupervised Learning of Semantic Relation Composition Eduardo Blanco and Dan Moldovan Human Language Technology Research Institute The University of Texas at Dallas Richardson, TX 75080 USA {eduardo,moldovan}@hlt.utdallas.edu Abstract This paper presents an unsupervised method for deriving inference axioms by composing semantic relations. The method is independent of any particular relation inventory. It relies on describing semantic relations using primitives and manipulating these primitives according to an algebra. The method was tested using a set of eight semantic relations yielding 78 inference axioms which were evaluated over PropBank. 1 Introduction Capturing the meaning of text is a long term goal within the NLP community. Whereas during the last decade the field has seen syntactic parsers mature and achieve high performance, the progress in semantics has been more modest. Previous research has mostly focused on relations between particular kind of arguments, e.g., semantic roles, noun compounds. Notwithstanding their significance, they target a fairly narrow text semantics compared to the broad semantics encoded in text. Consider the sentence in Figure 1. Semantic role labelers exclusively detect the relations indicated with solid arrows, which correspond to the sentence syntactic dependencies. On top of those roles, there are at least three more relations (discontinuous arrows) that encode semantics other than the verbargument relations. In this paper, we venture beyond semantic relation extraction from text and investigate techniques to compose them. We explore the idea of inferring S NP VP A man . . . AGT V PP NP VP came AGT before the . . . LOC LOC yesterday TMP TMP to talk . . . PRP Figure 1: Semantic representation of A man from the Bush administration came before the House Agricultural Committee yesterday to talk about ... (wsj 0134, 0). a new relation linking the ends of a chain of relations. This scheme, informally used previously for combining HYPERNYM with other relations, has not been studied for arbitrary pairs of relations. For example, it seems adequate to state the following: if x is PART-OF y and y is HYPERNYM of z, then x is PART-OF z. An inference using this rule can be obtained instantiating x, y and z with engine, car and convertible. Going a step further, we consider nonobvious inferences involving AGENT, PURPOSE and other semantic relations. The novelties of this paper are twofold. First, an extended definition for semantic relations is proposed, including (1) semantic restrictions for their domains and ranges, and (2) semantic primitives. Second, an algorithm for obtaining inference axioms is described. Axioms take as their premises chains of two relations and output a new relation linking the ends of the chain. This adds an extra layer of semantics on top of previously extracted re1456 Primitive Description Inv. Ref. 1: Composable Relation can be meaningfully composed with other relations due to their fundamental characteristics id. [3] 2: Functional x is in a specific spatial or temporal position with respect to y in order for the connection to exist id. [1] 3: Homeomerous x must be the same kind of thing as y id. [1] 4: Separable x can be temporally or spatially separated from y; they can exist independently id. [1] 5: Temporal x temporally precedes y op. 
[2] 6: Connected x is physically or temporally connected to y; connection might be indirect. id. [3] 7: Intrinsic Relation is an attribute of the essence/stufflike nature of x and y id. [3] 8: Volitional Relation requires volition between the arguments id. 9: Universal Relation is always true between x and y id. 10: Fully Implicational The existence of x implies the existence of y op. 11: Weakly Implicational The existence of x sometimes implies the existence of y op. Table 1: List of semantic primitives. In the fourth column, [1] stands for (Winston et al., 1987), [2] for (Cohen and Losielle, 1988) and [3] for (Huhns and Stephens, 1989). lations. The conclusion of an axiom is identified using an algebra for composing semantic primitives. We name this framework Composition of Semantic Relations (CSR). The extended definition, set of primitives, algebra to compose primitives and CSR algorithm are independent of any particular set of relations. We first presented CSR and used it over PropBank in (Blanco and Moldovan, 2011). In this paper, we extend that work using a different set of primitives and relations. Seventy eight inference axioms are obtained and an empirical evaluation shows that inferred relations have high accuracies. 2 Semantic Relations Semantic relations are underlying relations between concepts. In general, they are defined by a textual definition accompanied by a few examples. For example, Chklovski and Pantel (2004) loosely define ENABLEMENT as a relation that holds between two verbs V1 and V2 when the pair can be glossed as V1 is accomplished by V2 and gives two examples: assess::review and accomplish::complete. We find this widespread kind of definition weak and prone to confusion. Following (Helbig, 2005), we propose an extended definition for semantic relations, including semantic restrictions for its arguments. For example, AGENT(x, y) holds between an animate concrete object x and a situation y. Moreover, we propose to characterize relations by semantic primitives. Primitives indicate whether a property holds between the arguments of a relation, e.g., the primitive temporal indicates if the first argument must happen before the second. Besides having a better understanding of each relation, this extended definition allows us to identify possible and not possible combinations of relations, as well as to automatically determine the conclusion of composing a possible combination. Formally, for a relation R(x, y), the extended definitions specifies: (a) DOMAIN(R) and RANGE(R) (i.e., semantic restrictions for x and y); and (b) PR (i.e., values for the primitives). The inverse relation R−1 can be obtained by switching domain and range, and defining PR−1 as depicted in Table 1. 2.1 Semantic Primitives Semantic primitives capture deep characteristics of relations. They are independently determinable for each relation and specify a property between an element of the domain and an element of the range of the relation being described (Huhns and Stephens, 1989). Primitives are fundamental, they cannot be explained using other primitives. For each primitive, each relation takes a value from the set V = {+, −, 0}. ‘+’ indicates that the primitive holds, ‘−’ that it does not hold, and ‘0’ that it does not apply. Since a cause must precede its effect, we have P temporal CAUSE = +. Primitives complement the definition of a relation and completely characterize it. Coupled with domain and range restrictions, primitives allow us to automatically manipulate and reason over relations. 
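As an illustration of how such an extended definition could be represented, consider the following sketch (ours, not from the paper). The class and helper names are invented, the sort labels are placeholders, and the primitive vector shown for CAUSE is made up for the example rather than taken from the paper; the inverse follows the id./op. column of Table 1, under which the Temporal and the two Implicational primitives flip and the rest stay unchanged.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

# 1-based indices of the primitives marked "op." in Table 1
# (5: Temporal, 10: Fully Implicational, 11: Weakly Implicational)
OPPOSITE_PRIMITIVES = (5, 10, 11)
OPPOSITE = {"+": "-", "-": "+", "0": "0"}

@dataclass(frozen=True)
class SemanticRelation:
    name: str
    domain: FrozenSet[str]            # sorts allowed for the first argument x
    range: FrozenSet[str]             # sorts allowed for the second argument y
    primitives: Tuple[str, ...]       # 11 values from {'+', '-', '0'}

    def inverse(self) -> "SemanticRelation":
        """Build R^-1: swap domain and range, flip the primitives marked 'op.'."""
        flipped = tuple(
            OPPOSITE[v] if i + 1 in OPPOSITE_PRIMITIVES else v
            for i, v in enumerate(self.primitives)
        )
        return SemanticRelation(self.name + "^-1", self.range, self.domain, flipped)

# illustrative instance; the sort labels and primitive values are invented for the example
CAUSE = SemanticRelation(
    name="CAUSE",
    domain=frozenset({"si"}),
    range=frozenset({"si"}),
    primitives=("+", "+", "-", "+", "+", "+", "-", "0", "-", "+", "+"),
)
print(CAUSE.inverse())
```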
1457 1:Composable R2 R1 − 0 + − × 0 × 0 0 0 0 + × 0 + 2:Functional R2 R1 − 0 + − − 0 + 0 0 0 0 + + 0 + 3:Homeomerous R2 R1 − 0 + − − − − 0 − 0 0 + − 0 + 4:Separable R2 R1 − 0 + − − −− 0 − 0 + + − + + 5:Temporal R2 R1 − 0 + − − −× 0 − 0 + + × + + 6:Connected R2 R1 − 0 + − −− + 0 − 0 + + + + + 7:Intrinsic R2 R1 − 0 + − − 0 − 0 0 0 0 + − 0 + 8:Volitional R2 R1 − 0 + − − 0 + 0 0 0 + + + + + 9:Universal R2 R1 − 0 + − − 0 − 0 0 0 0 + − 0 + 10:F. Impl. R2 R1 −0 + − −0 × 0 0 0 0 + × 0 + 11:W. Impl. R2 R1 − 0 + − − − × 0 − 0 + + × + + Table 2: Algebra for composing semantic primitives. The set of primitives used in this paper (Table 1) is heavily based on previous work in Knowledge Bases (Huhns and Stephens, 1989), but we considered some new primitives. The new primitives are justified by the fact that we aim at composing relations capturing the semantics from natural language. Whatever the set of relations, it will describe the characteristics of events (who / what / where / when / why / how) and connections between them (e.g., CAUSE, CORRELATION). Time, space and volition also play an important role. The third column in Table 1 indicates the value of the primitive for the inverse relation: id. means it takes the same; op. the opposite. The opposite of −is +, the opposite of + is −, and the opposite of 0 is 0. 2.1.1 An Algebra for Composing Semantic Primitives The key to automatically obtain inference axioms is the ability to know the result of composing primitives. Given P i R1 and P i R2, i.e., the values of the ith primitive for R1 and R2, we define an algebra for P i R1 ◦P i R2, i.e., the result of composing them. Table 2 depicts the algebra for all primitives. An ‘×’ means that the composition is prohibited. Consider, for example, the Intrinsic primitive: if both relations are intrinsic (+), the composition is intrinsic (+); else if intrinsic does not apply to either relation (0), the primitive does not apply to the composition either (0); else the composition is not intrinsic (−). 3 Inference Axioms Semantic relations are composed using inference axioms. An axiom is defined by using the composiR1 ◦R2 R1−1 ◦R2 x R1 R3 y R2 z x R3 y R2 R1 z R2 ◦R1 R2 ◦R1−1 x R2 R3 y R1 z x R3 R2 y z R1 Table 3: The four unique possible axioms taking as premises R1 and R2. Conclusions are indicated by R3 and are not guaranteed to be the same for the four axioms. tion operator ‘◦’; it combines two relations called premises and yields a conclusion. We denote an axiom as R1(x, y) ◦R2(y, z) →R3(x, z), where R1 and R2 are the premises and R3 the conclusion. In order to instantiate an axiom, the premises must form a chain by having argument y in common. In general, for n relations there are n 2  pairs. For each pair, taking into account inverse relations, there are 16 possible combinations. Applying property Ri ◦Rj = (Rj−1 ◦Ri−1)−1, only 10 are unique: (a) 4 combine R1, R2 and their inverses (Table 3); (b) 3 combine R1 and R1−1; and (c) 3 combine R2 and R2−1. The most interesting axioms fall into category (a) and there are n 2  × 4 + 3n = 2 × n(n −1) + 3n = 2n2 + n potential axioms in this category. Depending on n, the number of potential axioms to consider can be significantly large. For n = 20, there are 820 axioms to explore and for n = 30, 1,830. 
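Before these candidate axioms are filtered, it is worth seeing how mechanical the algebra of Table 2 is in practice. The sketch below is ours: the 3x3 tables are transcribed from the flattened rendering of Table 2 above (rows give the value for R1, columns the value for R2, both in the order -, 0, +), so they should be double-checked against the original table; 'x' stands for the prohibited symbol '×', and the function names are invented.

```python
# Each primitive's table: three strings = rows for R1 in (-,0,+); chars = columns for R2 in (-,0,+).
ALGEBRA = {
    1:  ("x0x", "000", "x0+"),   # Composable
    2:  ("-0+", "000", "+0+"),   # Functional
    3:  ("---", "-00", "-0+"),   # Homeomerous
    4:  ("---", "-0+", "-++"),   # Separable
    5:  ("--x", "-0+", "x++"),   # Temporal
    6:  ("--+", "-0+", "+++"),   # Connected
    7:  ("-0-", "000", "-0+"),   # Intrinsic
    8:  ("-0+", "00+", "+++"),   # Volitional
    9:  ("-0-", "000", "-0+"),   # Universal
    10: ("-0x", "000", "x0+"),   # Fully Implicational
    11: ("--x", "-0+", "x++"),   # Weakly Implicational
}
IDX = {"-": 0, "0": 1, "+": 2}

def compose_primitives(p1, p2):
    """Compose two 11-value primitive vectors; the result may contain the prohibited mark 'x'."""
    return tuple(ALGEBRA[i + 1][IDX[a]][IDX[b]] for i, (a, b) in enumerate(zip(p1, p2)))

def consistent(p3, composed):
    """A candidate conclusion is consistent unless some composed value is prohibited ('x')
    or the two vectors carry opposite values ('+' vs '-') for the same primitive."""
    return all(c != "x" and {a, c} != {"+", "-"} for a, c in zip(p3, composed))

# the AGENT and PURPOSE^-1 vectors used in the worked example of Section 3.3 below
P_AGT     = ("+", "+", "-", "+", "0", "-", "-", "+", "-", "0", "0")
P_PRP_INV = ("+", "-", "-", "+", "+", "-", "-", "-", "-", "0", "+")
print(compose_primitives(P_AGT, P_PRP_INV))
# expected, per the paper: ('+', '+', '-', '+', '+', '-', '-', '+', '-', '0', '+')
```

Running it on those two vectors reproduces the composed vector reported in the worked example, which is a useful sanity check on the transcription.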
Manual examination of those potential ax1458 Relation R Domain Range P 1 R P 2 R P 3 R P 4 R P 5 R P 6 R P 7 R P 8 R P 9 R P 10 R P 11 R a: CAU CAUSE si si + + + + + 0 + + b: INT INTENT si aco + + + + 0 c: PRP PURPOSE si, ao si, co, ao + + 0 d: AGT AGENT aco si + + + 0 + 0 0 e: MNR MANNER st, ao, ql si + + 0 + 0 0 f : AT-L AT-LOCATION o, si loc + + 0 0 + 0 0 0 g: AT-T AT-TIME o, si tmp + + 0 0 + 0 0 0 h: SYN SYNONYMY ent ent + + 0 0 0 + 0 + 0 0 Table 4: Extended definition for the set of relations. ioms would be time-consuming and prone to errors. We avoid this by using the extended definition and the algebra for composing primitives. 3.1 Necessary Conditions for Composing Semantic Relations There are two necessary conditions for composing R1 and R2: • They have to be compatible. A pair of relations is compatible if it is possible, from a theoretical point of view, to compose them. Formally, R1 and R2 are compatible iff RANGE(R1) ∩DOMAIN(R2) ̸= ∅. • A third relation R3 must match as conclusion, i.e., ∃R3 such that DOMAIN(R3) ∩ DOMAIN(R1) ̸= ∅ and RANGE(R3) ∩ RANGE(R2) ̸= ∅. Furthermore, PR3 must be consistent with PR1 ◦PR2. 3.2 CSR: An Algorithm for Composing Semantic Relations Consider any set of relations R defined using the extended definition. One can obtain inference axioms using the following algorithm: For (R1, R2) ∈R × R: For (Ri, Rj) ∈[(R1, R2), (R1 −1, R2), (R2, R1), (R2, R1 −1)]: 1. Domain and range compatibility If RANGE(Ri) ∩DOMAIN(Rj) = ∅, break 2. Conclusion match Repeat for R3 ∈possible conc(R, Ri, Rj): (a) If DOMAIN(R3) ∩DOMAIN(Ri) = ∅or RANGE(R3) ∩RANGE(Rj) = ∅, break (b) If consistent(PR3, PRi ◦PRj ), axioms += Ri(x, y) ◦Rj(y, z) →R3(x, z) Given R, R−1 can be automatically obtained (Section 2). Possible conc(R, Ri, Rj) returns the set R unless Ri (Rj) is universal (P 9 = +), in which case it returns Rj (Ri). Consistent(PR1, PR2) is a simple procedure that compares the values assigned to each primitive; two values are consistent unless they have different opposite values or any of them is ‘×’ (i.e., the composition is prohibited). 3.3 An Example: Agent and Purpose We present an example of applying the CSR algorithm by inspecting the potential axiom AGENT(x, y) ◦PURPOSE−1(y, z) →R3(x, z), where x is the agent of y, and action y has as its purpose z. A statement instantiating the premises is [Mary]x [came]y to [talk]z about the issue. Knowing AGENT(Mary, came) and PURPOSE−1(came, talk), our goal is to identify the links R3(Mary, talk), if any. We use the relations as defined in Table 4. First, we note that both AGENT and PURPOSE−1 are compatible (Step 1). Second, we must identify the possible conclusions R3 that fit as conclusions (Step 2). Given PAGENT and PPURPOSE−1, we obtain PAGENT ◦ PPURPOSE−1 using the algebra: PAGENT = {+,+,−,+, 0,−,−,+,−,0, 0} PPURPOSE−1 = {+,−,−,+,+,−,−,−,−,0,+} PAGENT ◦PPURPOSE−1 = {+,+,−,+,+,−,−,+,−,0,+} Out of all relations (Section 4), AGENT and INTENT−1 fit the conclusion match. First, their domains and ranges are compatible with the composition (Step 2a). Second, both PAGENT and PINTENT−1 are consistent with PAGENT ◦PPURPOSE−1 (Step 2b). Thus, we obtain the following axioms: AGENT(x, y) ◦PURPOSE−1(y, z) →AGENT(x, z) and AGENT(x, y) ◦PURPOSE−1(y, z) →INTENT−1(x, z). Instantiating the axioms over [Mary]x [came]y to [talk]z about the issue yields AGENT(Mary, talk) and INTENT−1(Mary, talk). 
Namely, the axioms 1459 R2 R2 R2 R1 a b c d e f g h R1 a b c d e f g h R1 a−1 b−1 c−1 d−1 e−1 f−1 g−1 h−1 a a : : - f g a a−1 : b b f g a−1 a : : d−1 a b f g b b−1 b−1 : : b−1,d−1 f g b−1 b : : b c : b c - e f g c c−1 b−1 : : e f g c−1 c : : : b,d−1 e−1 c d d - d d f g d d−1 f g d−1 d d b−1,d b,d d e - b e e f g e e−1 b,d e−1 e,e−1 f g e−1 e e b−1,d−1 e,e−1 e f f f−1 f−1 f−1 f−1 f−1 f−1 - f−1 f f g g g−1 g−1 g−1 g−1 g−1 g−1 - g−1 g g h a b c d e f g h h−1 a b c d e f g h,h−1 h a−1 b−1 c−1 d−1 e−1 f−1 g−1 h,h−1 Table 5: Inference axioms automatically obtained using the relations from Table 4. A letter indicates an axiom R1 ◦R2 →R3 by indicating R3. An empty cell indicates that R1 and R2 do not have compatible domains and ranges; ‘:’ that the composition is prohibited; and ‘-’ that a relation R3 such that PR3 is consistent with PR1 ◦PR2 could not be found. yield Mary is the agent of talking, and she has the intention of talking. These two relations are valid but most probably ignored by a role labeler since Mary is not an argument of talk. 4 Case Study In this Section, we apply the CSR algorithm over a set of eight well-known relations. It is out of the scope of this paper to explain in detail the semantics of each relation or their detection. Our goal is to obtain inference axioms and, taking for granted that annotation is available, evaluate their accuracy. The only requirement for the CSR algorithm is to define semantic relations using the extended definition (Table 4). To define domains and ranges, we use the ontology in Section 4.2. Values for the primitives are assigned manually. The meaning of each relations is as follows: • CAU(x, y) encodes a relation between two situations, where the existence of y is due to the previous existence of x, e.g., He [got]y a bad grade because he [didn’t submit]x the project. • INT(x, y) links an animate concrete object and the situations he wants to become true, e.g., [Mary]y would like to [grow]x bonsais. • PRP(x, y) holds between a concept y and its main goal x. Purposes can be defined for situations, e.g., [pruning]y allows new [growth]x; concrete objects, e.g., the [garage]y is used for [storage]x; or abstract objects, e.g., [language]y is used to [communicate]x. • AGT(x, y) links a situation y and its intentional doer x, e.g., [Mary]x [went]y to Paris. x is restricted to animate concrete objects. • MNR(x, y) holds between the mode, way, style or fashion x in which a situation y happened. x can be a state, e.g., [walking]y [holding]x hands; abstract objects, e.g., [die]y [with pain]x; or qualities, e.g. [fast]x [delivery]y. • AT-L(x, y) defines the spatial context y of an object or situation x, e.g., He [went]x [to Cancun]y, [The car]x is [in the garage]y. • AT-T(x, y) links an object or situation x, with its temporal information y, e.g., He [went]x [yesterday]y, [20th century]y [sculptures]x. • SYN(x, y) can be defined between any two entities and holds when both arguments are semantically equivalent, e.g., SYN(dozen, twelve). 4.1 Inference Axioms Automatically Obtained After applying the CSR algorithm over the relations in Table 4, we obtain 78 unique inference axioms (Table 5). Each sub table must be indexed with the first and second premises as row and column respectively. The table on the left summarizes axioms R1 ◦R2 →R3 and R2 ◦R1 →R3, the one in the middle axiom R1−1 ◦R2 →R3 and the one on the right axiom R2 ◦R1−1 →R3. 
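As a companion to Table 5, the search that produces such a table can be written down compactly. The sketch below is our own reading of the CSR algorithm in Section 3.2, not the authors' implementation: relations are assumed to be objects of the kind sketched in Section 2 (name, domain and range sets, primitive vector, inverse()), the compose and consistent helpers are the ones sketched after Table 2, and both the special case for universal relations (P9 = '+') and the R1 with R1^-1 axiom families are omitted for brevity.

```python
from itertools import combinations

def csr_axioms(relations, compose, consistent):
    """Enumerate inference axioms R_i(x,y) o R_j(y,z) -> R_3(x,z) (category (a) only).

    relations : list of relation objects with .name, .domain, .range, .primitives, .inverse()
    compose, consistent : the primitive-algebra helpers
    """
    axioms = []
    for r1, r2 in combinations(relations, 2):
        for ri, rj in [(r1, r2), (r1.inverse(), r2), (r2, r1), (r2, r1.inverse())]:
            # Step 1: domain/range compatibility of the chain R_i(x,y), R_j(y,z)
            if not (ri.range & rj.domain):
                continue
            composed = compose(ri.primitives, rj.primitives)
            # Step 2: look for conclusions R_3 that match
            for r3 in relations:
                # Step 2a: the conclusion must be compatible with x and z
                if not (r3.domain & ri.domain) or not (r3.range & rj.range):
                    continue
                # Step 2b: the conclusion's primitives must be consistent with the composition
                if consistent(r3.primitives, composed):
                    axioms.append((ri.name, rj.name, r3.name))
    return axioms
```

With the eight relations of Table 4 and the algebra of Table 2 plugged in, a loop of this shape is what would, in principle, populate a table like Table 5.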
The CSR algorithm identifies several correct axioms and accurately marks as prohibited several combinations that would lead to wrong inferences: • For CAUSE, the inherent transitivity is detected (a ◦a →a). Also, no relation is inferred between two different effects of the same cause (a−1 ◦a →:) and between two causes of the same effect (a ◦a−1 →:). • The location and temporal information of concept y is inherited by its cause, intention, purpose, agent and manner (sub table on the left, f and g columns). 1460 • As expected, axioms involving SYNONYMY as one of their premises yield the other premise as their conclusion (all sub tables). • The AGENT of y is inherited by its causes, purposes and manners (d row, sub table on the right). In all examples below, AGT(x, y) holds, and we infer AGT(x, z) after composing it with R2: (1) [He]x [went]y after [reading]z a good review, R2: CAU−1(y, z); (2) [They]x [went]y to [talk]z about it, R2: PRP−1(y, z); and (3) [They]x [were walking]y [holding]z hands, R2: MNR−1(y, z) An AGENT for a situation y is also inherited by its effects, and the situations that have y as their manner or purpose (d row, sub table on the left). • A concept intends the effects of its intentions and purposes (b−1 ◦a →b−1, c−1 ◦a → b−1). For example, [I]x printed the document to [read]y and [learn]z the contents; INT−1(I, read) ◦CAU(read, learn) →INT−1(I, learn). It is important to note that domain and range restrictions are not sufficient to identify inference axioms; they only filter out pairs of not compatible relations. The algebra to compose primitives is used to detect prohibited combinations of relations based on semantic grounds and identify the conclusion of composing them. Without primitives, the cells in Table 5 would be either empty (marking the pair as not compatible) or would simply indicate that the pair has compatible domain and range (without identifying the conclusion). Table 5 summarizes 136 unique pairs of premises (recall Ri ◦Rj = (Rj−1 ◦Ri−1)−1). Domain and range restrictions mark 39 (28.7%) as not compatible. The algebra labels 12 pairs as prohibited (8.8%, [12.4% of the compatible pairs]) and is unable to find a conclusion 14 times (10.3%, [14.4%]). Finally, conclusions are found for 71 pairs (52.2%, [73.2%]). Since more than one conclusion might be detected for the same pair of premises, 78 inference axioms are ultimately identified. 4.2 Ontology In order to define domains and ranges, we use a simplified version of the ontology presented in (Helbig, 2005). We find enough to contemplate only seven base classes: ev, st, co, aco, ao, loc and tmp. Entities (ent) refer to any concept and are divided into situations (si), objects (o) and descriptors (des). • Situations are anything that happens at a time and place and are divided into events (ev) and states (st). Events imply a change in the status of other entities (e.g., grow, conference); states do not (e.g., be standing, account for 10%). • Objects can be either concrete (co, palpable, tangible, e.g., table, keyboard) or abstract (ao, intangible, product of human reasoning, e.g., disease, weight). Concrete objects can be further classified as animate (aco) if they have life, vigor or spirit (e.g. John, cat). • Descriptors state properties about the local (loc, e.g., by the table, in the box) or temporal (tmp, e.g., yesterday, last month) context of an entity. 
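The domain and range checks depend on the subsumption structure of this ontology. The sketch below encodes the parent links just listed and tests whether two sort sets intersect once subsumption is taken into account; how exactly the sort hierarchy is consulted is not spelled out in this excerpt, so the helper functions are an assumption.

```python
PARENT = {                        # simplified ontology of Section 4.2
    'ev': 'si', 'st': 'si',       # events and states are situations
    'aco': 'co',                  # animate concrete objects are concrete
    'co': 'o', 'ao': 'o',         # concrete and abstract objects are objects
    'loc': 'des', 'tmp': 'des',   # local and temporal descriptors
    'si': 'ent', 'o': 'ent', 'des': 'ent',
}

def ancestors(sort):
    """The sort itself plus everything that subsumes it, up to 'ent'."""
    out = {sort}
    while sort in PARENT:
        sort = PARENT[sort]
        out.add(sort)
    return out

def sorts_intersect(sorts_a, sorts_b):
    """True if some sort on one side equals or subsumes a sort on the other."""
    return any(a in ancestors(b) or b in ancestors(a)
               for a in sorts_a for b in sorts_b)

# RANGE(AGT) = {si} against DOMAIN(PURPOSE^-1) = {si, co, ao}, as read off Table 4:
print(sorts_intersect({'si'}, {'si', 'co', 'ao'}))   # True
print(sorts_intersect({'aco'}, {'loc', 'tmp'}))      # False
```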
This simplified ontology does not aim at defining domains and ranges for any relation set; it is a simplification to fit the eight relations we work with. 5 Evaluation An evaluation was performed to estimate the validity of the 78 axioms. Because the number of axioms is large we have focused on a subset of them (Table 6). The 31 axioms having SYN as premise are intuitively correct: since synonymous concepts are interchangeable, given veracious annotation they perform valid inferences. We use PropBank annotation (Palmer et al., 2005) to instantiate the premises of each axiom. First, all instantiations of axiom PRP ◦MNR−1 →MNR−1 were manually checked. This axiom yields 237 new MANNER, 189 of which are valid (Accuracy 0.80). Second, we evaluated axioms 1–7 (Table 6). Since PropBank is a large corpus, we restricted this phase to the first 1,000 sentences in which there is an instantiation of any axiom. These sentences contain 1,412 instantiations and are found in the first 31,450 sentences of PropBank. Table 6 depicts the total number of instantiations for each axiom and its accuracy (columns 3 and 4). Accuracies range from 0.40 to 0.90, showing that the plausibility of an axiom depends on the axiom. The average accuracy for axioms involving CAU is 0.54 and for axioms involving PRP is 0.87. Axiom CAU ◦AGT−1 →AGT−1 adds 201 relations, which corresponds to 0.89% in relative terms. Its accuracy is low, 0.40. Other axioms are less productive but have a greater relative impact and accu1461 no heuristic with heuristic No. Axiom No. Inst. Acc. Produc. No. Inst. Acc. Produc. 1 CAU ◦AGT−1 →AGT−1 201 0.40 0.89% 75 0.67 0.33% 2 CAU ◦AT-L →AT-L 17 0.82 0.84% 15 0.93 0.74% 3 CAU ◦AT-T →AT-T 72 0.85 1.25% 69 0.87 1.20% 1–3 CAU ◦R2 →R3 290 0.54 0.96% 159 0.78 0.52% 4 PRP ◦AGT−1 →AGT−1 375 0.89 1.66% 347 0.94 1.54% 5 PRP ◦AT-L →AT-L 49 0.90 2.42% 48 0.92 2.37% 6 PRP ◦AT-T →AT-T 138 0.84 2.40% 129 0.88 2.25% 7 PRP ◦MNR−1 →MNR−1 71 0.82 3.21% 70 0.83 3.16% 4–7 PRP ◦R2 →R3 633 0.87 1.95% 594 0.91 1.83% 1–7 All 923 0.77 2.84% 753 0.88 2.32% Table 6: Axioms used for evaluation, number of instances, accuracy and productivity (i.e., percentage of relations added on top the ones already present). Results are reported with and without the heuristic. . . . space officials AGT AGT in Tokyo in July for an exhibit CAU AT-T AT-L stopped by . . . AT-L AT-T Figure 2: Basic (solid arrows) and inferred relations (discontinuous) from A half-dozen Soviet space officials, in Tokyo in July for an exhibit, stopped by to see their counterparts at the National ... (wsj 0405, 1). racy. For example, axiom PRP ◦MNR−1 →MNR−1, only yields 71 new MNR, and yet it is adding 3.21% in relative terms with an accuracy of 0.82. Overall, applying the seven axioms adds 923 relations on top of the ones already present (2.84% in relative terms) with an accuracy of 0.77. Figure 2 shows examples of inferences using axioms 1–3. 5.1 Error Analysis Because of the low accuracy of axiom 1, an error analysis was performed. We found that unlike other axioms, this axiom often yield a relation type that is already present in the semantic representation. Specifically, it often yields R(x, z) when R(x’, z) is already known. We use the following heuristic in order to improve accuracy: do not instantiate an axiom R1(x, y) ◦R2(y, z) →R3(x, z) if a relation of the form R3(x’, z) is already known. This simple heuristic has increased the accuracy of the inferences at the cost of lowering their productivity. 
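The heuristic of Section 5.1 is easy to state procedurally. The sketch below is not the authors' code; the triple encoding is the same illustrative one used above, and it adds the filter that blocks a conclusion R3(x, z) whenever some R3(x', z) is already annotated.

```python
from typing import List, Set, Tuple

Instance = Tuple[str, str, str]   # (relation, x, y)
Axiom = Tuple[str, str, str]      # R1 o R2 -> R3

def apply_axioms_filtered(instances: Set[Instance],
                          axioms: List[Axiom]) -> Set[Instance]:
    """Axiom application with the heuristic: skip R3(x, z) when the
    annotation already contains some R3(x', z)."""
    inferred = set()
    for r1, r2, r3 in axioms:
        # Second arguments z already linked by R3 in the existing annotation.
        taken = {z for rel, _x, z in instances if rel == r3}
        for rel_a, x, y in instances:
            if rel_a != r1:
                continue
            for rel_b, y2, z in instances:
                if rel_b == r2 and y2 == y and z not in taken:
                    inferred.add((r3, x, z))
    return inferred - instances
```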
The last three columns in Table 6 show results when using the heuristic. 6 Comparison with Previous Work There have been many proposals to detect semantic relations from text without composition. Researches have targeted particular relations (e.g., CAUSE (Chang and Choi, 2006; Bethard and Martin, 2008)), relations within noun phrases (Nulty, 2007), named entities (Hirano et al., 2007) or clauses (Szpakowicz et al., 1995). Competitions include (Litkowski, 2004; Carreras and M`arquez, 2005; Girju et al., 2007; Hendrickx et al., 2009). Two recent efforts (Ruppenhofer et al., 2009; Gerber and Chai, 2010) are similar to CSR in their goal (i.e., extract meaning ignored by current semantic parsers), but completely differ in their means. Their merit relies on annotating and extracting semantic connections not originally contemplated (e.g., between concepts from two different sentences) using an already known and fixed relation set. Unlike CSR, they are dependent on the relation inventory, require annotation and do not reason or manipulate relations. In contrast to all the above references and the state of the art, the proposed framework obtains axioms that take as input semantic relations pro1462 duced by others and output more relations: it adds an extra layer of semantics previously ignored. Previous research has exploited the idea of using semantic primitives to define and classify semantic relations under the names of relation elements, deep structure, aspects and primitives. The first attempt on describing semantic relations using primitives was made by Chaffin and Herrmann (1987); they differentiate 31 relations using 30 relation elements clustered into five groups (intensional force, dimension, agreement, propositional and part-whole inclusion). Winston et al. (1987) introduce 3 relation elements (functional, homeomerous and separable) to distinguish six subtypes of PART-WHOLE. Cohen and Losielle (1988) use the notion of deep structure in contrast to the surface relation and utilizes two aspects (hierarchical and temporal). Huhns and Stephens (1989) consider a set of 10 primitives. In theoretical linguistics, Wierzbicka (1996) introduced the notion of semantic primes to perform linguistic analysis. Dowty (2006) studies compositionality and identifies entailments associated with certain predicates and arguments (Dowty, 2001). There has not been much work on composing relations in the field of computational linguistics. The term compositional semantics is used in conjunction with the principle of compositionality, i.e., the meaning of a complex expression is determined from the meanings of its parts, and the way in which those parts are combined. These approaches are usually formal and use a potentially infinite set of predicates to represent semantics. Ge and Mooney (2009) extracts semantic representations using syntactic structures while Copestake et al. (2001) develops algebras for semantic construction within grammars. Logic approaches include (Lakoff, 1970; S´anchez Valencia, 1991; MacCartney and Manning, 2009). Composition of Semantic Relations is complimentary to Compositional Semantics. Previous research has manually extracted plausible inference axioms for WordNet relations (Harabagiu and Moldovan, 1998) and transformed chains of relations into theoretical axioms (Helbig, 2005). The CSR algorithm proposed here automatically obtains inference axioms. Composing relations has been proposed before within knowledge bases. 
Cohen and Losielle (1988) combines a set of nine fairly specific relations (e.g., FOCUS-OF, PRODUCT-OF, SETTING-OF). The key to determine plausibility is the transitivity characteristic of the aspects: two relations shall not combine if they have contradictory values for any aspect. The first algebra to compose semantic primitives was proposed by Huhns and Stephens (1989). Their relations are not linguistically motivated and ten of them map to some sort of PART-WHOLE (e.g. PIECEOF, SUBREGION-OF). Unlike (Cohen and Losielle, 1988; Huhns and Stephens, 1989), we use typical relations that encode the semantics of natural language, propose a method to automatically obtain the inverse of a relation and empirically test the validity of the axioms obtained. 7 Conclusions Going beyond current research, in this paper we investigate the composition of semantic relations. The proposed CSR algorithm obtains inference axioms that take as their input semantic relations and output a relation previously ignored. Regardless of the set of relations and annotation scheme, an additional layer of semantics is created on top of the already existing relations. An extended definition for semantic relations is proposed, including restrictions on their domains and ranges as well as values for semantic primitives. Primitives indicate if a certain property holds between the arguments of a relation. An algebra for composing semantic primitives is defined, allowing to automatically determine the primitives values for the composition of any two relations. The CSR algorithm makes use of the extended definition and algebra to discover inference axioms in an unsupervised manner. Its usefulness is shown using a set of eight common relations, obtaining 78 axioms. Empirical evaluation shows the axioms add 2.32% of relations in relative terms with an overall accuracy of 0.88, more than what state-of-the-art semantic parsers achieve. The framework presented is completely independent of any particular set of relations. Even though different sets may call for different ontologies and primitives, we believe the model is generally applicable; the only requirement is to use the extended definition. This is a novel way of retrieving semantic relations in the field of computational linguistics. 1463 References Steven Bethard and James H. Martin. 2008. Learning Semantic Links from a Corpus of Parallel Temporal and Causal Relations. In Proceedings of ACL-08: HLT, Short Papers, pages 177–180, Columbus, Ohio. Eduardo Blanco and Dan Moldovan. 2011. A Model for Composing Semantic Relations. In Proceedings of the 9th International Conference on Computational Semantics (IWCS 2011), Oxford, UK. Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 shared task: semantic role labeling. In CONLL ’05: Proceedings of the Ninth Conference on Computational Natural Language Learning, pages 152–164, Morristown, NJ, USA. Roger Chaffin and Douglass J. Herrmann, 1987. Relation Element Theory: A New Account of the Representation and Processing of Semantic Relations. Du S. Chang and Key S. Choi. 2006. Incremental cue phrase learning and bootstrapping method for causality extraction using cue phrase and word pair probabilities. Information Processing & Management, 42(3):662–678. Timothy Chklovski and Patrick Pantel. 2004. VerbOcean: Mining the Web for Fine-Grained Semantic Verb Relations. In Proceedings of EMNLP 2004, pages 33– 40, Barcelona, Spain. Paul R. Cohen and Cynthia L. Losielle. 1988. 
Beyond ISA: Structures for Plausible Inference in Semantic Networks. In Proceedings of the Seventh National conference on Artificial Intelligence, St. Paul, Minnesota. Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An Algebra for Semantic Construction in Constraint-based Grammars. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 140–147, Toulouse, France. David D. Dowty. 2001. The Semantic Asymmetry of ‘Argument Alternations’ (and Why it Matters). In Geart van der Meer and Alice G. B. ter Meulen, editors, Making Sense: From Lexeme to Discourse, volume 44. David Dowty. 2006. Compositionality as an Empirical Problem. In Chris Barker and Polly Jacobson, editors, Papers from the Brown University Conference on Direct Compositionality. Oxford University Press. Ruifang Ge and Raymond Mooney. 2009. Learning a Compositional Semantic Parser using an Existing Syntactic Parser. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 611–619, Suntec, Singapore. Matthew Gerber and Joyce Chai. 2010. Beyond NomBank: A Study of Implicit Arguments for Nominal Predicates. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1583–1592, Uppsala, Sweden. Roxana Girju, Preslav Nakov, Vivi Nastase, Stan Szpakowicz, Peter Turney, and Deniz Yuret. 2007. SemEval-2007 Task 04: Classification of Semantic Relations between Nominals. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 13–18, Prague, Czech Republic. Sanda Harabagiu and Dan Moldovan. 1998. Knowledge Processing on an Extended WordNet. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database and Some of its Applications., chapter 17, pages 684–714. The MIT Press. Hermann Helbig. 2005. Knowledge Representation and the Semantics of Natural Language. Springer, 1st edition. Iris Hendrickx, Su N. Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations Between Pairs of Nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009), pages 94–99, Boulder, Colorado. Toru Hirano, Yoshihiro Matsuo, and Genichiro Kikui. 2007. Detecting Semantic Relations between Named Entities in Text Using Contextual Features. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Demo and Poster Sessions, pages 157–160, Prague, Czech Republic. Michael N. Huhns and Larry M. Stephens. 1989. Plausible Inferencing Using Extended Composition. In IJCAI’89: Proceedings of the 11th international joint conference on Artificial intelligence, pages 1420– 1425, San Francisco, CA, USA. George Lakoff. 1970. Linguistics and Natural Logic. 22(1):151–271, December. Ken Litkowski. 2004. Senseval-3 task: Automatic labeling of semantic roles. In Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 9–12, Barcelona, Spain. Bill MacCartney and Christopher D. Manning. 2009. An extended model of natural logic. In Proceedings of the Eight International Conference on Computational Semantics, pages 140–156, Tilburg, The Netherlands. Paul Nulty. 2007. Semantic Classification of Noun Phrases Using Web Counts and Learning Algorithms. 
In Proceedings of the ACL 2007 Student Research Workshop, pages 79–84, Prague, Czech Republic. 1464 Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31(1):71–106. Josef Ruppenhofer, Caroline Sporleder, Roser Morante, Collin Baker, and Martha Palmer. 2009. SemEval2010 Task 10: Linking Events and Their Participants in Discourse. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions (SEW-2009), pages 106–111, Boulder, Colorado. Victor S´anchez Valencia. 1991. Studies on Natural Logic and Categorial Grammar. Ph.D. thesis, University of Amsterdam. Barker Szpakowicz, Ken Barker, and Stan Szpakowicz. 1995. Interactive semantic analysis of Clause-Level Relationships. In Proceedings of the Second Conference of the Pacific Association for Computational Linguistics, pages 22–30. Anna Wierzbicka. 1996. Semantics: Primes and Universals. Oxford University Press, USA. Morton E. Winston, Roger Chaffin, and Douglas Herrmann. 1987. A Taxonomy of Part-Whole Relations. Cognitive Science, 11(4):417–444. 1465
2011
146
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1466–1475, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Unsupervised Discovery of Domain-Specific Knowledge from Text Dirk Hovy, Chunliang Zhang, Eduard Hovy Information Sciences Institute University of Southern California 4676 Admiralty Way, Marina del Rey, CA 90292 {dirkh, czheng, hovy}@isi.edu Anselmo Pe˜nas UNED NLP and IR Group Juan del Rosal 16 28040 Madrid, Spain [email protected] Abstract Learning by Reading (LbR) aims at enabling machines to acquire knowledge from and reason about textual input. This requires knowledge about the domain structure (such as entities, classes, and actions) in order to do inference. We present a method to infer this implicit knowledge from unlabeled text. Unlike previous approaches, we use automatically extracted classes with a probability distribution over entities to allow for context-sensitive labeling. From a corpus of 1.4m sentences, we learn about 250k simple propositions about American football in the form of predicateargument structures like “quarterbacks throw passes to receivers”. Using several statistical measures, we show that our model is able to generalize and explain the data statistically significantly better than various baseline approaches. Human subjects judged up to 96.6% of the resulting propositions to be sensible. The classes and probabilistic model can be used in textual enrichment to improve the performance of LbR end-to-end systems. 1 Introduction The goal of Learning by Reading (LbR) is to enable a computer to learn about a new domain and then to reason about it in order to perform such tasks as question answering, threat assessment, and explanation (Strassel et al., 2010). This requires joint efforts from Information Extraction, Knowledge Representation, and logical inference. All these steps depend on the system having access to basic, often unstated, foundational knowledge about the domain. Most documents, however, do not explicitly mention this information in the text, but assume basic background knowledge about the domain, such as positions (“quarterback”), titles (“winner”), or actions (“throw”) for sports game reports. Without this knowledge, the text will not make sense to the reader, despite being well-formed English. Luckily, the information is often implicitly contained in the document or can be inferred from similar texts. Our system automatically acquires domainspecific knowledge (classes and actions) from large amounts of unlabeled data, and trains a probabilistic model to determine and apply the most informative classes (quarterback, etc.) at appropriate levels of generality for unseen data. E.g., from sentences such as “Steve Young threw a pass to Michael Holt”, “Quarterback Steve Young finished strong”, and “Michael Holt, the receiver, left early” we can learn the classes quarterback and receiver, and the proposition “quarterbacks throw passes to receivers”. We will thus assume that the implicit knowledge comes in two forms: actions in the form of predicate-argument structures, and classes as part of the source data. Our task is to identify and extract both. Since LbR systems must quickly adapt and scale well to new domains, we need to be able to work with large amounts of data and minimal supervision. Our approach produces simple propositions about the domain (see Figure 1 for examples of actual propositions learned by our system). 
American football was the first official evaluation domain in the DARPA-sponsored Machine Reading program, and provides the background for a number 1466 of LbR systems (Mulkar-Mehta et al., 2010). Sports is particularly amenable, since it usually follows a finite, explicit set of rules. Due to its popularity, results are easy to evaluate with lay subjects, and game reports, databases, etc. provide a large amount of data. The same need for basic knowledge appears in all domains, though. In music, musicians play instruments, in electronics, components constitute circuits, circuits use electricity, etc. Teams beat teams Teams play teams Quarterbacks throw passes Teams win games Teams defeat teams Receivers catch passes Quarterbacks complete passes Quarterbacks throw passes to receivers Teams play games Teams lose games Figure 1: The ten most frequent propositions discovered by our system for the American football domain Our approach differs from verb-argument identification or Named Entity (NE) tagging in several respects. While previous work on verb-argument selection (Pardo et al., 2006; Fan et al., 2010) uses fixed sets of classes, we cannot know a priori how many and which classes we will encounter. We therefore provide a way to derive the appropriate classes automatically and include a probability distribution for each of them. Our approach is thus less restricted and can learn context-dependent, finegrained, domain-specific propositions. While a NEtagged corpus could produce a general proposition like “PERSON throws to PERSON”, our method enables us to distinguish the arguments and learn “quarterback throws to receiver” for American football and “outfielder throws to third base” for baseball. While in NE tagging each word has only one correct tag in a given context, we have hierarchical classes: an entity can be correctly labeled as a player or a quarterback (and possibly many more classes), depending on the context. By taking context into account, we are also able to label each sentence individually and account for unseen entities without using external resources. Our contributions are: • we use unsupervised learning to train a model that makes use of automatically extracted classes to uncover implicit knowledge in the form of predicate-argument propositions • we evaluate the explanatory power, generalization capability, and sensibility of the propositions using both statistical measures and human judges, and compare them to several baselines • we provide a model and a set of propositions that can be used to improve the performance of end-to-end LbR systems via textual enrichment. 2 Methods INPUT: Steve Young threw a pass to Michael Holt 1. PARSE INPUT: 2. JOIN NAMES, EXTRACT PREDICATES: NVN: Steve_Young throw pass NVNPN: Steve_Young throw pass to Michael_Holt 3. DECODE TO INFER PROPOSITIONS: QUARTERBACK throw pass QUARTERBACK throw pass to RECEIVER Steve/NNP Young/NNP throw/VBD pass/NN a/DT to/TO Michael/NNP Holt/NNP nsubj dobj prep nn nn pobj det Steve_Young threw a s1 s2 x1 p1 p2 quarterback throw p Figure 2: Illustrated example of different processing steps Our running example will be “Steve Young threw a pass to Michael Holt”. This is an instance of the underlying proposition “quarterbacks throw passes to receivers”, which is not explicitly stated in the data. A proposition is thus a more general statement about the domain than the sentences it derives. It contains domain-specific classes (quarterback, receiver), as well as lexical items (“throw”, “pass”). 
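A minimal sketch of steps 1 and 2 of Figure 2 is given below. It is not the authors' code: the token and dependency encodings, the helper names, and the hand-written parse of the running example are assumptions, using the Stanford-style labels (nsubj, dobj, prep, pobj, nn) shown in the figure.

```python
def extract_predicates(tokens, deps):
    """Extract NVN and NVNPN structures from a dependency parse.

    tokens: {index: (word, pos_tag)}
    deps:   (head_index, label, dependent_index) triples."""
    def arg(idx):
        # Join multiword proper names (nn chains of NNPs) with '_'.
        parts = sorted([d for h, l, d in deps if h == idx and l == 'nn'] + [idx])
        return '_'.join(tokens[i][0] for i in parts)

    structures = []
    for h, l, d in deps:
        if l != 'nsubj' or not tokens[h][1].startswith('VB'):
            continue
        verb, subj = h, d
        for h2, l2, d2 in deps:
            if h2 == verb and l2 == 'dobj':
                nvn = (arg(subj), tokens[verb][0], arg(d2))
                structures.append(nvn)
                for h3, l3, d3 in deps:          # optional PP: NVNPN
                    if h3 == verb and l3 == 'prep':
                        for h4, l4, d4 in deps:
                            if h4 == d3 and l4 == 'pobj':
                                structures.append(nvn + (tokens[d3][0], arg(d4)))
    return structures

# Running example (lemmatized verb, as in Figure 2).
toks = {1: ('Steve', 'NNP'), 2: ('Young', 'NNP'), 3: ('throw', 'VBD'),
        4: ('a', 'DT'), 5: ('pass', 'NN'), 6: ('to', 'TO'),
        7: ('Michael', 'NNP'), 8: ('Holt', 'NNP')}
dps = [(3, 'nsubj', 2), (2, 'nn', 1), (3, 'dobj', 5), (5, 'det', 4),
       (3, 'prep', 6), (6, 'pobj', 8), (8, 'nn', 7)]
print(extract_predicates(toks, dps))
# [('Steve_Young', 'throw', 'pass'),
#  ('Steve_Young', 'throw', 'pass', 'to', 'Michael_Holt')]
```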
In order to reproduce the proposition, given the input sentences, our system has to not only identify the classes, but also learn when to 1467 abstract away from the lexical form to the appropriate class and when to keep it (cf. Figure 2, step 3). To facilitate extraction, we focus on propositions with the following predicate-argument structures: NOUN-VERB-NOUN (e.g., “quarterbacks throw passes”), or NOUN-VERB-NOUNPREPOSITION-NOUN (e.g., “quarterbacks throw passes to receivers”. There is nothing, though, that prevents the use of other types of structures as well. We do not restrict the verbs we consider (Pardo et al., 2006; Ritter et al., 2010)), which extracts a high number of hapax structures. Given a sentence, we want to find the most likely class for each word and thereby derive the most likely proposition. Similar to Pardo et al. (2006), we assume the observed data was produced by a process that generates the proposition and then transforms the classes into a sentence, possibly adding additional words. We model this as a Hidden Markov Model (HMM) with bigram transitions (see Section 2.3) and use the EM algorithm (Dempster et al., 1977) to train it on the observed data, with smoothing to prevent overfitting. 2.1 Data We use a corpus of about 33k texts on American football, extracted from the New York Times (Sandhaus, 2008). To identify the articles, we rely on the provided “football” keyword classifier. The resulting corpus comprises 1, 359, 709 sentences from game reports, background stories, and opinion pieces. In a first step, we parse all documents with the Stanford dependency parser (De Marneffe et al., 2006) (see Figure 2, step 1). The output is lemmatized (collapsing “throws”, “threw”, etc., into “throw”), and marked for various dependencies (nsubj, amod, etc.). This enables us to extract the predicate argument structure, like subjectverb-object, or additional prepositional phrases (see Figure 2, step 2). These structures help to simplify the model by discarding additional words like modifiers, determiners, etc., which are not essential to the proposition. The same approach is used by (Brody, 2007). We also concatenate multiword names (identified by sequences of NNPs) with an underscore to form a single token (“Steve/NNP Young/NNP” →“Steve Young”). 2.2 Deriving Classes To derive the classes used for entities, we do not restrict ourselves to a fixed set, but derive a domainspecific set directly from the data. This step is performed simultaneously with the corpus generation described above. We utilize three syntactic constructions to identify classes, namely nominal modifiers, copula verbs, and appositions, see below. This is similar in nature to Hearst’s lexico-syntactic patterns (Hearst, 1992) and other approaches that derive ISA relations from text. While we find it straightforward to collect classes for entities in this way, we did not find similar patterns for verbs. Given a suitable mechanism, however, these could be incorporated into our framework as well. Nominal modifier are common nouns (labeled NN) that precede proper nouns (labeled NNP), as in “quarterback/NN Steve/NNP Young/NNP”, where “quarterback” is the nominal modifier of “Steve Young”. Similar information can be gained from appositions (e.g., “Steve Young, the quarterback of his team, said...”), and copula verbs (“Steve Young is the quarterback of the 49ers”). We extract those cooccurrences and store the proper nouns as entities and the common nouns as their possible classes. 
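As an illustration of the nominal-modifier pattern, the following sketch collects (entity, class) counts from POS-tagged text; it is an assumption about the bookkeeping, and the apposition and copula patterns, which would feed the same count table, are omitted for brevity.

```python
from collections import defaultdict

def nominal_modifier_classes(tagged_sentences):
    """Count (entity, class) pairs from a common noun (NN) immediately
    preceding a proper-noun sequence (NNP+), e.g. 'quarterback/NN
    Steve/NNP Young/NNP'."""
    counts = defaultdict(lambda: defaultdict(int))   # entity -> class -> count
    for sent in tagged_sentences:
        for i, (word, tag) in enumerate(sent):
            if tag != 'NN' or i + 1 >= len(sent) or sent[i + 1][1] != 'NNP':
                continue
            j, parts = i + 1, []
            while j < len(sent) and sent[j][1] == 'NNP':
                parts.append(sent[j][0])
                j += 1
            counts['_'.join(parts)][word] += 1
    return counts

sents = [[('quarterback', 'NN'), ('Steve', 'NNP'), ('Young', 'NNP'),
          ('finished', 'VBD'), ('strong', 'JJ')]]
print(dict(nominal_modifier_classes(sents)['Steve_Young']))
# {'quarterback': 1}
```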
For each pair of class and entity, we collect counts over the corpus to derive probability distributions. Entities for which we do not find any of the above patterns in our corpus are marked “UNK”. These entities are instantiated with the 20 most frequent classes. All other (non-entity) words (including verbs) have only their identity as class (i.e., “pass” remains “pass”). The average number of classes per entity is 6.87. The total number of distinct classes for entities is 63, 942. This is a huge number to model in our state space.1 Instead of manually choosing a subset of the classes we extracted, we defer the task of finding the best set to the model. We note, however, that the distribution of classes for each entity is highly skewed. Due to the unsupervised nature of the extraction process, many of the extracted classes are hapaxes and/or random noise. Most entities have only a small number of applicable classes (a football player usually has one main posi1NE taggers usually use a set of only a few dozen classes at most. 1468 tion, and a few additional roles, such as star, teammate, etc.). We handle this by limiting the number of classes considered to 3 per entity. This constraint reduces the total number of distinct classes to 26, 165, and the average number of classes per entity to 2.53. The reduction makes for a more tractable model size without losing too much information. The class alphabet is still several magnitudes larger than that for NE or POS tagging. Alternatively, one could use external resources such as Wikipedia, Yago (Suchanek et al., 2007), or WordNet++ (Ponzetto and Navigli, 2010) to select the most appropriate classes for each entity. This is likely to have a positive effect on the quality of the applicable classes and merits further research. Here, we focus on the possibilities of a self-contained system without recurrence to outside resources. The number of classes we consider for each entity also influences the number of possible propositions: if we consider exactly one class per entity, there will be little overlap between sentences, and thus no generalization possible—the model will produce many distinct propositions. If, on the other hand, we used only one class for all entities, there will be similarities between many sentences—the model will produce very few distinct propositions. 2.3 Probabilistic Model pass to Michael Holt pass pass to Michael_Holt ass ass to receiver w to Michael Holt prep nn pobj Steve_Young threw a pass to Michael_Holt s1 s2 x1 s3 s4 s5 p1 p2 p3 p4 p5 quarterback throw pass to receiver Figure 3: Graphical model for the running example We use a generative noisy-channel model to capture the joint probability of input sentences and their underlying proposition. Our generative story of how a sentence s (with words s1, ..., sn) was generated assumes that a proposition p is generated as a sequence of classes p1, ..., pn, with transition probabilities P(pi|pi−1). Each class pi generates a word si with probability P(si|pi). We allow additional words x in the sentence which do not depend on any class in the proposition and are thus generated independently with P(x) (cf. model in Figure 3). Since we observe the co-occurrence counts of classes and entities in the data, we can fix the emission parameter P(s|p) in our HMM. Further, we do not want to generate sentences from propositions, so we can omit the step that adds the additional words x in our model. 
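A possible implementation of this bookkeeping is sketched below. The renormalization after pruning to the three most frequent classes and the uniform fallback over the twenty most frequent classes for unknown entities are assumptions; the text only specifies the counts, the 3-class limit, and the 20-class instantiation of UNK.

```python
from collections import Counter

def class_distributions(counts, n_best=3, n_unk=20):
    """counts: {entity: {class: frequency}} as extracted from the corpus.
    Returns a lookup function giving P(class | entity)."""
    global_counts = Counter()
    for entity_classes in counts.values():
        global_counts.update(entity_classes)
    unk_classes = [c for c, _ in global_counts.most_common(n_unk)]

    dists = {}
    for entity, entity_classes in counts.items():
        top = Counter(entity_classes).most_common(n_best)
        total = sum(freq for _, freq in top)
        dists[entity] = {c: freq / total for c, freq in top}

    def lookup(entity):
        if entity in dists:
            return dists[entity]
        # Entities never seen with a class pattern ('UNK' in the text).
        return {c: 1.0 / len(unk_classes) for c in unk_classes}
    return lookup

lookup = class_distributions({'Steve_Young': {'quarterback': 12, 'star': 3,
                                              'winner': 2, 'teammate': 1}})
print({c: round(p, 2) for c, p in lookup('Steve_Young').items()})
# {'quarterback': 0.71, 'star': 0.18, 'winner': 0.12}
```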
The removal of these words is reflected by the preprocessing step that extracts the structure (cf. Section 2.1). Our model is thus defined as P(s, p) =P(p1) · n Y i=1  P(pi|pi−1) · P(si|pi)  (1) where si, pi denote the ith word of sentence s and proposition p, respectively. 3 Evaluation We want to evaluate how well our model predicts the data, and how sensible the resulting propositions are. We define a good model as one that generalizes well and produces semantically useful propositions. We encounter two problems. First, since we derive the classes in a data-driven way, we have no gold standard data available for comparison. Second, there is no accepted evaluation measure for this kind of task. Ultimately, we would like to evaluate our model externally, such as measuring its impact on performance of a LbR system. In the absence thereof, we resort to several complementary measures, as well as performing an annotation task. We derive evaluation criteria as follows. A model generalizes well if it can cover (‘explain’) all the sentences in the corpus with a few propositions. This requires a measure of generality. However, while a proposition such as “PERSON does THING”, has excellent generality, it possesses no discriminating power. We also need the propositions to partition the sentences into clusters of semantic similarity, to support effective inference. This requires a measure of distribution. Maximal distribution, achieved by assigning every sentence to a different proposition, however, is not useful either. We need to find an appropriate level of generality within which the sentences are clustered into propositions for the best overall groupings to support inference. To assess the learned model, we apply the measures of generalization, entropy, and perplexity (see 1469 Sections 3.2, 3.3, and 3.4). These measures can be used to compare different systems. We do not attempt to weight or combine the different measures, but present each in its own right. Further, to assess label accuracy, we use Amazon’s Mechanical Turk annotators to judge the sensibility of the propositions produced by each system (Section 3.5). We reason that if our system learned to infer the correct classes, then the resulting propositions should constitute true, general statements about that domain, and thus be judged as sensible.2 This approach allows the effective annotation of sufficient amounts of data for an evaluation (first described for NLP in (Snow et al., 2008)). 3.1 Evaluation Data With the trained model, we use Viterbi decoding to extract the best class sequence for each example in the data. This translates the original corpus sentences into propositions. See steps 2 and 3 in Figure 2. We create two baseline systems from the same corpus, one which uses the most frequent class (MFC) for each entity, and another one which uses a class picked at random from the applicable classes of each entity. Ultimately, we are interested in labeling unseen data from the same domain with the correct class, so we evaluate separately on the full corpus and the subset of sentences that contain unknown entities (i.e., entities for which no class information was available in the corpus, cf. Section 2.2). For the latter case, we select all examples containing at least one unknown entity (labeled UNK), resulting in a subset of 41, 897 sentences, and repeat the evaluation steps described above. Here, we have to consider a much larger set of possible classes per entity (the 20 overall most frequent classes). 
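Equation (1) defines an HMM whose most probable class sequence can be recovered with standard Viterbi decoding, as used later for labeling the corpus. The sketch below is illustrative: classes_of, p_start, p_trans and p_emit stand in for the extracted class inventory and the trained parameters, which are assumed to be smoothed so that no probability is zero.

```python
import math

def viterbi(sentence, classes_of, p_start, p_trans, p_emit):
    """Most likely class sequence p for sentence s under Eq. (1):
    P(s, p) = P(p1) * prod_i P(p_i | p_{i-1}) * P(s_i | p_i)."""
    # Map each candidate class to (log prob of best path ending in it, path).
    table = {c: (math.log(p_start(c)) + math.log(p_emit(sentence[0], c)), [c])
             for c in classes_of(sentence[0])}
    for word in sentence[1:]:
        new_table = {}
        for c in classes_of(word):
            prev, (lp, path) = max(
                table.items(),
                key=lambda kv: kv[1][0] + math.log(p_trans(c, kv[0])))
            new_table[c] = (lp + math.log(p_trans(c, prev))
                            + math.log(p_emit(word, c)), path + [c])
        table = new_table
    return max(table.values(), key=lambda v: v[0])[1]
```

During EM training the emission distribution stays fixed to the extracted class counts, so presumably only the transition probabilities are re-estimated.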
The MFC baseline for these cases is the most frequent of the 20 classes for UNK tokens, while the random baseline chooses randomly from that set. 3.2 Generalization Generalization measures how widely applicable the produced propositions are. A completely lexical ap2Unfortunately, if judged insensible, we can not infer whether our model used the wrong class despite better options, or whether we simply have not learned the correct label. entropy Page 1 full data set unknown entities 0.00 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.04 0.01 0.12 0.09 0.25 0.66 Generalization random MFC model Figure 4: Generalization of models on the data sets proach, at one extreme, would turn each sentence into a separate proposition, thus achieving a generalization of 0%. At the other extreme, a model that produces only one proposition would generalize extremely well (but would fail to explain the data in any meaningful way). Both are of course not desirable. We define generalization as g = 1 −|propositions| |sentences| (2) The results in Figure 4 show that our model is capable of abstracting away from the lexical form, achieving a generalization rate of 25% for the full data set. The baseline approaches do significantly worse, since they are unable to detect similarities between lexically different examples, and thus create more propositions. Using a two-tailed t-test, the difference between our model and each baseline is statistically significant at p < .001. Generalization on the unknown entity data set is even higher (65.84%). The difference between the model and the baselines is again statistically significant at p < .001. MFC always chooses the same class for UNK, regardless of context, and performs much worse. The random baseline chooses between 20 classes per entity instead of 3, and is thus even less general. 3.3 Normalized Entropy Entropy is used in information theory to measure how predictable data is. 0 means the data is completely predictable. The higher the entropy of our propositions, the less well they explain the data. We are looking for models with low entropy. The extreme case of only one proposition has 0 entropy: 1470 entropy Page 1 full data set unknown entities 0.00 0.10 0.20 0.30 0.40 0.50 0.60 0.70 0.80 0.90 1.00 1.00 1.00 0.99 0.99 0.89 0.50 Normalized Entropy random MFC model Figure 5: Entropy of models on the data sets we know exactly which sentences are produced by the proposition. Entropy is directly influenced by the number of propositions used by a system.3 In order to compare different models, we thus define normalized entropy as HN = − nP i=0 Pi · log Pi log n (3) where Pi is the coverage of the proposition, or the percentage of sentences explained by it, and n is the number of distinct propositions. The entropy of our model on the full data set is relatively high with 0.89, see Figure 5. The best entropy we can hope to achieve given the number of propositions and sentences is actually 0.80 (by concentrating the maximum probability mass in one proposition). The model thus does not perform as badly as the number might suggest. The entropy of our model on unseen data is better, with 0.50 (best possible: 0.41). This might be due to the fact that we considered more classes for UNK than for regular entities. 3.4 Perplexity Since we assume that propositions are valid sentences in our domain, good propositions should have a higher probability than bad propositions in a language model. 
We can compute this using perplex3Note that how many classes we consider per entity influences how many propositions are produced (cf. Section 2.2), and thus indirectly puts a bound on entropy. entropy Page 1 full data set unknown entities 50.00 51.00 52.00 53.00 54.00 55.00 56.00 57.00 58.00 59.00 60.00 59.52 57.03 57.03 57.15 56.84 54.92 Perplexity random MFC model Figure 6: Perplexity of models on the data sets ity:4 perplexity(data) = 2 −log P (data) n (4) where P(data) is the product of the proposition probabilities, and n is the number of propositions. We use the uni-, bi-, and trigram counts of the GoogleGrams corpus (Brants and Franz, 2006) with simple interpolation to compute the probability of each proposition. The results in Figure 6 indicate that the propositions found by the model are preferable to the ones found by the baselines. As would be expected, the sensibility judgements for MFC and model5 (Tables 1 and 2, Section 3.5) are perfectly anti-correlated (correlation coefficient −1) with the perplexity for these systems in each data set. However, due to the small sample size, this should be interpreted cautiously. 3.5 Sensibility and Label Accuracy In unsupervised training, the model with the best data likelihood does not necessarily produce the best label accuracy. We evaluate label accuracy by presenting subjects with the propositions we obtained from the Viterbi decoding of the corpus, and ask them to rate their sensibility. We compare the different systems by computing sensibility as the percentage of propositions judged sensible for each system. Since the underlying probability distributions are quite different, we weight the sensibility judgement for each proposition by the likelihood of that proposition. We report results for both aggregate 4Perplexity also quantifies the uncertainty of the resulting propositions, where 0 perplexity means no uncertainty. 5We did not collect sensibility judgements for the random baseline. 1471 accuracy Page 1 System 90.16 92.13 69.35 70.57 88.84 90.37 94.28 96.55 70.93 70.45 93.06 95.16 100 most frequent random combined Data set agg maj agg maj agg maj full baseline model Table 1: Percentage of propositions derived from labeling the full data set that were judged sensible accuracy Page 1 System 51.92 51.51 32.39 28.21 50.39 49.66 66.00 69.57 48.14 41.74 64.83 67.76 100 most frequent random combined Data set agg maj agg maj agg maj unknown baseline model Table 2: Percentage of propositions derived from labeling unknown entities that were judged sensible sensibility (using the total number of individual answers), and majority sensibility, where each proposition is scored according to the majority of annotators’ decisions. The model and baseline propositions for the full data set are both judged highly sensible, achieving accuracies of 96.6% and 92.1% (cf. Table 1). While our model did slightly better, the differences are not statistically significant when using a two-tailed test. The propositions produced by the model from unknown entities are less sensible (67.8%), albeit still significantly above chance level, and the baseline propositions for the same data set (p < 0.01). Only 49.7% propositions of the baseline were judged sensible (cf. Table 2). 3.5.1 Annotation Task Our model finds 250, 169 distinct propositions, the MFC baseline 293, 028. We thus have to restrict ourselves to a subset in order to judge their sensibility. 
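Both measures are simple to compute once every sentence has been mapped to a proposition. In the sketch below the proposition probabilities are taken as given, since the interpolation over the Google n-gram counts is not reproduced here.

```python
import math
from collections import Counter

def normalized_entropy(decoded_propositions):
    """H_N = -sum_i P_i log P_i / log n  (Eq. 3), where P_i is the share of
    sentences covered by proposition i and n the number of propositions."""
    counts = Counter(decoded_propositions)
    total = sum(counts.values())
    n = len(counts)
    if n < 2:
        return 0.0   # a single proposition explains the data with no uncertainty
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(n)

def perplexity(proposition_probs):
    """2 ** (-log2 P(data) / n)  (Eq. 4), with P(data) the product of the
    language-model probabilities of the n propositions."""
    n = len(proposition_probs)
    return 2 ** (-sum(math.log2(p) for p in proposition_probs) / n)
```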
For each system, we sample the 100 most frequent propositions and 100 random propositions found for both the full data set and the unknown entities6 and have 10 annotators rate each proposition as sensible or insensible. To identify and omit bad annotators (‘spammers’), we use the method described in Section 3.5.2, and measure inter-annotator agreement as described in Section 3.5.3. The details of this evaluation are given below, the results can be found in Tables 1 and 2. The 200 propositions from each of the four sys6We omit the random baseline here due to size issues, and because it is not likely to produce any informative comparison. tems (model and baseline on both full and unknown data set), contain 696 distinct propositions. We break these up into 70 batches (Amazon Turk annotation HIT pages) of ten propositions each. For each proposition, we request 10 annotators. Overall, 148 different annotators participated in our annotation. The annotators are asked to state whether each proposition represents a sensible statement about American Football or not. A proposition like “Quarterbacks can throw passes to receivers” should make sense, while “Coaches can intercept teams” does not. To ensure that annotators judge sensibility and not grammaticality, we format each proposition the same way, namely pluralizing the nouns and adding “can” before the verb. In addition, annotators can state whether a proposition sounds odd, seems ungrammatical, is a valid sentence, but against the rules (e.g., “Coaches can hit players”) or whether they do not understand it. 3.5.2 Spammers Some (albeit few) annotators on Mechanical Turk try to complete tasks as quickly as possible without paying attention to the actual requirements, introducing noise into the data. We have to identify these spammers before the evaluation. One way is to include tests. Annotators that fail these tests will be excluded. We use a repetition (first and last question are the same), and a truism (annotators answering ”no” either do not know about football or just answered randomly). Alternatively, we can assume that good annotators, who are the majority, will exhibit similar behavior to one another, while spam1472 mers exhibit a deviant answer pattern. To identify those outliers, we compare each annotator’s agreement to the others and exclude those whose agreement falls more than one standard deviation below the average overall agreement. We find that both methods produce similar results. The first method requires more careful planning, and the resulting set of annotators still has to be checked for outliers. The second method has the advantage that it requires no additional questions. It includes the risk, though, that one selects a set of bad annotators solely because they agree with one another. 3.5.3 Agreement agreement Page 1 0.88 0.76 0.82 ! 0.45 0.50 0.48 0.66 0.53 0.58 measure 100 most frequent random combined agreement G-index Table 3: Agreement measures for different samples We use inter-annotator agreement to quantify the reliability of the judgments. Apart from the simple agreement measure, which records how often annotators choose the same value for an item, there are several statistics that qualify this measure by adjusting for other factors. One frequently used measure, Cohen’s κ, has the disadvantage that if there is prevalence of one answer, κ will be low (or even negative), despite high agreement (Feinstein and Cicchetti, 1990). 
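The statistical spammer filter can be sketched as follows. The text does not state which agreement statistic is compared across annotators, so plain pairwise percent agreement is assumed here.

```python
import statistics

def filter_spammers(answers):
    """answers: {annotator: {item: judgement}}.  Keep annotators whose mean
    agreement with the others is not more than one standard deviation below
    the overall mean agreement."""
    def pairwise_agreement(a, b):
        shared = set(answers[a]) & set(answers[b])
        if not shared:
            return None
        return sum(answers[a][i] == answers[b][i] for i in shared) / len(shared)

    mean_agreement = {}
    for a in answers:
        scores = []
        for b in answers:
            if b != a:
                s = pairwise_agreement(a, b)
                if s is not None:
                    scores.append(s)
        mean_agreement[a] = statistics.mean(scores) if scores else 0.0

    mu = statistics.mean(mean_agreement.values())
    sigma = statistics.pstdev(mean_agreement.values())
    return {a for a, s in mean_agreement.items() if s >= mu - sigma}
```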
This phenomenon, known as the κ paradox, is a result of the formula’s adjustment for chance agreement. As shown by Gwet (2008), the true level of actual chance agreement is realistically not as high as computed, resulting in the counterintuitive results. We include it for comparative reasons. Another statistic, the G-index (Holley and Guilford, 1964), avoids the paradox. It assumes that expected agreement is a function of the number of choices rather than chance. It uses the same general formula as κ, (Pa −Pe) (1 −Pe) (5) where Pa is the actual raw agreement measured, and Pe is the expected agreement. The difference with κ is that Pe for the G-index is defined as Pe = 1/q, where q is the number of available categories, instead of expected chance agreement. Under most conditions, G and κ are equivalent, but in the case of high raw agreement and few categories, G gives a more accurate estimation of the agreement. We thus report raw agreement, κ, and G-index. Despite early spammer detection, there are still outliers in the final data, which have to be accounted for when calculating agreement. We take the same approach as in the statistical spammer detection and delete outliers that are more than one standard deviation below the rest of the annotators’ average. The raw agreement for both samples combined is 0.82, G = 0.58, and κ = 0.48. The numbers show that there is reasonably high agreement on the label accuracy. 4 Related Research The approach we describe is similar in nature to unsupervised verb argument selection/selectional preferences and semantic role labeling, yet goes beyond it in several ways. For semantic role labeling (Gildea and Jurafsky, 2002; Fleischman et al., 2003), classes have been derived from FrameNet (Baker et al., 1998). For verb argument detection, classes are either semi-manually derived from a repository like WordNet, or from NE taggers (Pardo et al., 2006; Fan et al., 2010). This allows for domain-independent systems, but limits the approach to a fixed set of oftentimes rather inappropriate classes. In contrast, we derive the level of granularity directly from the data. Pre-tagging the data with NE classes before training comes at a cost. It lumps entities together which can have very different classes (i.e., all people become labeled as PERSON), effectively allowing only one class per entity. Etzioni et al. (2005) resolve the problem with a web-based approach that learns hierarchies of the NE classes in an unsupervised manner. We do not enforce a taxonomy, but include statistical knowledge about the distribution of possible classes over each entity by incorporating a prior distribution P(class, entity). This enables us to generalize from the lexical form without restricting ourselves to one class per entity, which helps to better fit the data. In addition, we can distinguish several classes for each entity, depending on the context 1473 (e.g., winner vs. quarterback). Ritter et al. (2010) also use an unsupervised model to derive selectional predicates from unlabeled text. They do not assign classes altogether, but group similar predicates and arguments into unlabeled clusters using LDA. Brody (2007) uses a very similar methodology to establish relations between clauses and sentences, by clustering simplified propositions. Pe˜nas and Hovy (2010) employ syntactic patterns to derive classes from unlabeled data in the context of LbR. They consider a wider range of syntactic structures, but do not include a probabilistic model to label new data. 
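The two statistics differ only in how the expected agreement Pe is estimated. The sketch below uses two raters and binary judgements on toy data, not the study's annotations, simply to contrast the formulas.

```python
def g_index(p_a, q):
    """G = (P_a - P_e) / (1 - P_e) with P_e = 1/q  (Eq. 5),
    q being the number of answer categories."""
    p_e = 1.0 / q
    return (p_a - p_e) / (1 - p_e)

def cohens_kappa(pairs):
    """Kappa for two raters with binary judgements; P_e is estimated from
    the raters' marginal distributions, which is what causes the paradox."""
    n = len(pairs)
    p_a = sum(a == b for a, b in pairs) / n
    p1 = sum(a for a, _ in pairs) / n
    p2 = sum(b for _, b in pairs) / n
    p_e = p1 * p2 + (1 - p1) * (1 - p2)
    return (p_a - p_e) / (1 - p_e)

judgements = [(1, 1), (1, 1), (0, 1), (0, 0), (1, 1)]
p_a = sum(a == b for a, b in judgements) / len(judgements)   # 0.8
print(round(g_index(p_a, q=2), 2), round(cohens_kappa(judgements), 2))
# 0.6 0.55
```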
5 Conclusion We use an unsupervised model to infer domainspecific classes from a corpus of 1.4m unlabeled sentences, and applied them to learn 250k propositions about American football. Unlike previous approaches, we use automatically extracted classes with a probability distribution over entities to allow for context-sensitive selection of appropriate classes. We evaluate both the model qualities and sensibility of the resulting propositions. Several measures show that the model has good explanatory power and generalizes well, significantly outperforming two baseline approaches, especially where the possible classes of an entity can only be inferred from the context. Human subjects on Amazon’s Mechanical Turk judged up to 96.6% of the propositions for the full data set, and 67.8% for data containing unseen entities as sensible. Inter-annotator agreement was reasonably high (agreement = 0.82, G = 0.58, κ = 0.48). The probabilistic model and the extracted propositions can be used to enrich texts and support postparsing inference for question answering. We are currently applying our method to other domains. Acknowledgements We would like to thank David Chiang, Victoria Fossum, Daniel Marcu, and Stephen Tratz, as well as the anonymous ACL reviewers for comments and suggestions to improve the paper. Research supported in part by Air Force Contract FA8750-09-C-0172 under the DARPA Machine Reading Program. References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of the 17th international conference on Computational linguistics-Volume 1, pages 86–90. Association for Computational Linguistics Morristown, NJ, USA. Thorsten Brants and Alex Franz, editors. 2006. The Google Web 1T 5-gram Corpus Version 1.1. Number LDC2006T13. Linguistic Data Consortium, Philadelphia. Samuel Brody. 2007. Clustering Clauses for HighLevel Relation Detection: An Information-theoretic Approach. In Annual Meeting-Association for Computational Linguistics, volume 45, page 448. Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In LREC 2006. Citeseer. Arthur P. Dempster, Nan M. Laird, and Donald B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1–38. Oren Etzioni, Michael Cafarella, Doug. Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artificial Intelligence, 165(1):91–134. James Fan, David Ferrucci, David Gondek, and Aditya Kalyanpur. 2010. Prismatic: Inducing knowledge from a large scale lexicalized relation resource. In Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading, pages 122–127, Los Angeles, California, June. Association for Computational Linguistics. Alvan R. Feinstein and Domenic V. Cicchetti. 1990. High agreement but low kappa: I. the problems of two paradoxes. Journal of Clinical Epidemiology, 43(6):543–549. Michael Fleischman, Namhee Kwon, and Eduard Hovy. 2003. Maximum entropy models for FrameNet classification. In Proceedings of EMNLP, volume 3. Danies Gildea and Dan Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288. Kilem Li Gwet. 2008. Computing inter-rater reliability and its variance in the presence of high agreement. 
British Journal of Mathematical and Statistical Psychology, 61(1):29–48. 1474 Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th conference on Computational linguistics-Volume 2, pages 539–545. Association for Computational Linguistics. Jasper Wilson Holley and Joy Paul Guilford. 1964. A Note on the G-Index of Agreement. Educational and Psychological Measurement, 24(4):749. Rutu Mulkar-Mehta, James Allen, Jerry Hobbs, Eduard Hovy, Bernardo Magnini, and Christopher Manning, editors. 2010. Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading. Association for Computational Linguistics, Los Angeles, California, June. Thiago Pardo, Daniel Marcu, and Maria Nunes. 2006. Unsupervised Learning of Verb Argument Structures. Computational Linguistics and Intelligent Text Processing, pages 59–70. Anselmo Pe˜nas and Eduard Hovy. 2010. Semantic enrichment of text with background knowledge. In Proceedings of the NAACL HLT 2010 First International Workshop on Formalisms and Methodology for Learning by Reading, pages 15–23, Los Angeles, California, June. Association for Computational Linguistics. Simone Paolo Ponzetto and Roberto Navigli. 2010. Knowledge-rich Word Sense Disambiguation rivaling supervised systems. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1522–1531. Association for Computational Linguistics. Alan Ritter, Mausam, and Oren Etzioni. 2010. A latent dirichlet allocation method for selectional preferences. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 424–434, Uppsala, Sweden, July. Association for Computational Linguistics. Evan Sandhaus, editor. 2008. The New York Times Annotated Corpus. Number LDC2008T19. Linguistic Data Consortium, Philadelphia. Rion Snow, Brendan O’Connor, Dan Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast—but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 254–263. Association for Computational Linguistics. Stephanie Strassel, Dan Adams, Henry Goldberg, Jonathan Herr, Ron Keesing, Daniel Oblinger, Heather Simpson, Robert Schrag, and Jonathan Wright. 2010. The DARPA Machine Reading Program-Encouraging Linguistic and Reasoning Research with a Series of Reading Tasks. In Proceedings of LREC 2010. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697–706. ACM. 1475
2011
147
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1476–1485, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Latent Semantic Word Sense Induction and Disambiguation Tim Van de Cruys RCEAL University of Cambridge United Kingdom [email protected] Marianna Apidianaki Alpage, INRIA & Univ Paris Diderot Sorbonne Paris Cit´e, UMRI-001 75013 Paris, France [email protected] Abstract In this paper, we present a unified model for the automatic induction of word senses from text, and the subsequent disambiguation of particular word instances using the automatically extracted sense inventory. The induction step and the disambiguation step are based on the same principle: words and contexts are mapped to a limited number of topical dimensions in a latent semantic word space. The intuition is that a particular sense is associated with a particular topic, so that different senses can be discriminated through their association with particular topical dimensions; in a similar vein, a particular instance of a word can be disambiguated by determining its most important topical dimensions. The model is evaluated on the SEMEVAL-2010 word sense induction and disambiguation task, on which it reaches stateof-the-art results. 1 Introduction Word sense induction (WSI) is the task of automatically identifying the senses of words in texts, without the need for handcrafted resources or manually annotated data. The manual construction of a sense inventory is a tedious and time-consuming job, and the result is highly dependent on the annotators and the domain at hand. By applying an automatic procedure, we are able to only extract the senses that are objectively present in a particular corpus, and it allows for the sense inventory to be straightforwardly adapted to a new domain. Word sense disambiguation (WSD), on the other hand, is the closely related task of assigning a sense label to a particular instance of a word in context, using an existing sense inventory. The bulk of WSD algorithms up till now use pre-defined sense inventories (such as WordNet) that often contain finegrained sense distinctions, which poses serious problems for computational semantic processing (Ide and Wilks, 2007). Moreover, most WSD algorithms take a supervised approach, which requires a significant amount of manually annotated training data. The model presented here induces the senses of words in a fully unsupervised way, and subsequently uses the induced sense inventory for the unsupervised disambiguation of particular occurrences of words. The induction step and the disambiguation step are based on the same principle: words and contexts are mapped to a limited number of topical dimensions in a latent semantic word space. The key idea is that the model combines tight, synonymlike similarity (based on dependency relations) with broad, topical similarity (based on a large ‘bag of words’ context window). The intuition in this is that the dependency features can be disambiguated by the topical dimensions identified by the broad contextual features; in a similar vein, a particular instance of a word can be disambiguated by determining its most important topical dimensions (based on the instance’s context words). The paper is organized as follows. Section 2 presents some previous research on distributional similarity and word sense induction. Section 3 gives an overview of our method for word sense induction and disambiguation. 
Section 4 provides a quantitative evaluation and comparison to other algorithms in the framework of the SEMEVAL-2010 word sense 1476 induction and disambiguation (WSI/WSD) task. The last section draws conclusions, and lays out a number of future research directions. 2 Previous Work 2.1 Distributional similarity According to the distributional hypothesis of meaning (Harris, 1954), words that occur in similar contexts tend to be semantically similar. In the spirit of this by now well-known adage, numerous algorithms have sprouted up that try to capture the semantics of words by looking at their distribution in texts, and comparing those distributions in a vector space model. One of the best known models in this respect is latent semantic analysis — LSA (Landauer and Dumais, 1997; Landauer et al., 1998). In LSA, a termdocument matrix is created, that contains the frequency of each word in a particular document. This matrix is then decomposed into three other matrices with a mathematical factorization technique called singular value decomposition (SVD). The most important dimensions that come out of the SVD are said to represent latent semantic dimensions, according to which nouns and documents can be represented more efficiently. Our model also applies a factorization technique (albeit a different one) in order to find a reduced semantic space. Context is a determining factor in the nature of the semantic similarity that is induced. A broad context window (e.g. a paragraph or document) yields broad, topical similarity, whereas a small context yields tight, synonym-like similarity. This has lead a number of researchers to use the dependency relations that a particular word takes part in as contextual features. One of the most important approaches is Lin (1998). An overview of dependency-based semantic space models is given in Pad´o and Lapata (2007). 2.2 Word sense induction The following paragraphs provide a succinct overview of word sense induction research. A thorough survey on word sense disambiguation (including unsupervised induction algorithms) is presented in Navigli (2009). Algorithms for word sense induction can roughly be divided into local and global ones. Local WSI algorithms extract the different senses of a word on a per-word basis, i.e. the different senses for each word are determined separately. They can be further subdivided into context-clustering algorithms and graph-based algorithms. In the context-clustering approach, context vectors are created for the different instances of a particular word, and those contexts are grouped into a number of clusters, representing the different senses of the word. The context vectors may be represented as first or secondorder co-occurrences (i.e. the contexts of the target word are similar if the words they in turn co-occur with are similar). The first one to propose this idea of context-group discrimination was Sch¨utze (1998), and many researchers followed a similar approach to sense induction (Purandare and Pedersen, 2004). In the graph-based approach, on the other hand, a co-occurrence graph is created, in which nodes represent words, and edges connect words that appear in the same context (dependency relation or context window). The senses of a word may then be discovered using graph clustering techniques (Widdows and Dorow, 2002), or algorithms such as HyperLex (V´eronis, 2004) or Pagerank (Agirre et al., 2006). Finally, Bordag (2006) recently proposed an approach that uses word triplets to perform word sense induction. 
The underlying idea is the ‘one sense per collocation’ assumption, and co-occurrence triplets are clustered based on the words they have in common. Global algorithms take an approach in which the different senses of a particular word are determined by comparing them to, and demarcating them from, the senses of other words in a full-blown word space model. The best known global approach is the one by Pantel and Lin (2002). They present a global clustering algorithm – coined clustering by committee (CBC) – that automatically discovers word senses from text. The key idea is to first discover a set of tight, unambiguous clusters, to which possibly ambiguous words can be assigned. Once a word has been assigned to a cluster, the features associated with that particular cluster are stripped off the word’s vector. This way, less frequent senses of the word may be discovered. Van de Cruys (2008) proposes a model for sense induction based on latent semantic dimensions. Using an extension of non-negative matrix factoriza1477 tion, the model induces a latent semantic space according to which both dependency features and broad contextual features are classified. Using the latent space, the model is able to discriminate between different word senses. The model presented below is an extension of this approach: whereas the model described in Van de Cruys (2008) is only able to perform word sense induction, our model is capable of performing both word sense induction and disambiguation. 3 Methodology 3.1 Non-negative Matrix Factorization Our model uses non-negative matrix factorization – NMF (Lee and Seung, 2000) in order to find latent dimensions. There are a number of reasons to prefer NMF over the better known singular value decomposition used in LSA. First of all, NMF allows us to minimize the Kullback-Leibler divergence as an objective function, whereas SVD minimizes the Euclidean distance. The Kullback-Leibler divergence is better suited for language phenomena. Minimizing the Euclidean distance requires normally distributed data, and language phenomena are typically not normally distributed. Secondly, the non-negative nature of the factorization ensures that only additive and no subtractive relations are allowed. This proves particularly useful for the extraction of semantic dimensions, so that the NMF model is able to extract much more clear-cut dimensions than an SVD model. And thirdly, the non-negative property allows the resulting model to be interpreted probabilistically, which is not straightforward with an SVD factorization. The key idea is that a non-negative matrix A is factorized into two other non-negative matrices, W and H Ai×j ≈Wi×kHk×j (1) where k is much smaller than i, j so that both instances and features are expressed in terms of a few components. Non-negative matrix factorization enforces the constraint that all three matrices must be non-negative, so all elements must be greater than or equal to zero. Using the minimization of the Kullback-Leibler divergence as an objective function, we want to find the matrices W and H for which the KullbackLeibler divergence between A and WH (the multiplication of W and H) is the smallest. This factorization is carried out through the iterative application of update rules. Matrices W and H are randomly initialized, and the rules in 2 and 3 are iteratively applied – alternating between them. In each iteration, each vector is adequately normalized, so that all dimension values sum to 1. 
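These multiplicative updates are the standard Lee and Seung rules for the Kullback-Leibler objective; they are stated formally as equations 2 and 3 just below. As a rough illustration only (not the authors' code), the updates and the per-iteration normalization can be sketched in NumPy as follows; the exact normalization scheme is our reading of the description above:

import numpy as np

def kl_update_H(A, W, H, eps=1e-9):
    # H_{a,mu} <- H_{a,mu} * (sum_i W_{i,a} A_{i,mu} / (WH)_{i,mu}) / sum_k W_{k,a}
    WH = W @ H + eps
    return H * (W.T @ (A / WH)) / (W.sum(axis=0)[:, None] + eps)

def kl_update_W(A, W, H, eps=1e-9):
    # W_{i,a} <- W_{i,a} * (sum_mu H_{a,mu} A_{i,mu} / (WH)_{i,mu}) / sum_v H_{a,v}
    WH = W @ H + eps
    return W * ((A / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)

def nmf_kl(A, k, n_iter=50, seed=0, eps=1e-9):
    """Factorize a non-negative matrix A (i x j) into W (i x k) and H (k x j)."""
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], k))
    H = rng.random((k, A.shape[1]))
    for _ in range(n_iter):
        H = kl_update_H(A, W, H)
        W = kl_update_W(A, W, H)
        W /= W.sum(axis=1, keepdims=True) + eps   # each word's k values sum to 1
        H /= H.sum(axis=0, keepdims=True) + eps   # each feature's k values sum to 1
    return W, H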
Haµ ←Haµ P i Wia Aiµ (WH)iµ P k Wka (2) Wia ←Wia P µ Haµ Aiµ (WH)iµ P v Hav (3) 3.2 Word sense induction Using an extension of non-negative matrix factorization, we are able to jointly induce latent factors for three different modes: words, their window-based (‘bag of words’) context words, and their dependency relations. Three matrices are constructed that capture the pairwise co-occurrence frequencies for the different modes. The first matrix contains cooccurrence frequencies of words cross-classified by dependency relations, the second matrix contains co-occurrence frequencies of words cross-classified by words that appear in the noun’s context window, and the third matrix contains co-occurrence frequencies of dependency relations cross-classified by cooccurring context words. NMF is then applied to the three matrices and the separate factorizations are interleaved (i.e. the results of the former factorization are used to initialize the factorization of the next matrix). A graphical representation of the interleaved factorization algorithm is given in figure 1. The procedure of the algorithm goes as follows. First, matrices W, H, G, and F are randomly initialized. We then start our first iteration, and compute the update of matrix W (using equation 3). Matrix W is then copied to matrix V, and the update of matrix G is computed (using equation 2). The transpose of matrix G is again copied to matrix U, and the update of F is computed (again using equation 2). As a last step, matrix F is copied to matrix H, and we restart the iteration loop until a stopping criterion (e.g. a maximum number of iterations, or no more significant change in objective function; we used the 1478 = x W H = x V G = x U F j i s k i j k A words x dependency relations B words x context words C context words x dependency relations k k k k i j i s j s s Figure 1: A graphical representation of the interleaved NMF algorithm former one) is reached.1 When the factorization is finished, the three different modes (words, windowbased context words and dependency relations) are all represented according to a limited number of latent factors. Next, the factorization that is thus created is used for word sense induction. The intuition is that a particular, dominant dimension of an ambiguous word is ‘switched off’, in order to reveal other possible senses of the word. Formally, we proceed as follows. Matrix H indicates the importance of each dependency relation given a topical dimension. With this knowledge, the dependency relations that are responsible for a certain dimension can be subtracted from the original noun vector. This is done by scaling down each feature of the original vector according to the load of the feature on the subtracted dimension, using equation 4. t = v(u1 −hk) (4) Equation 4 multiplies each dependency feature of the original noun vector v with a scaling factor, according to the load of the feature on the subtracted dimension (hk – the vector of matrix H that corresponds to the dimension we want to subtract). u1 is a vector of ones with the same length as hk. The result is vector t, in which the dependency features rel1Note that this is not the only possibly way of interleaving the different factorizations, but in our experiments we found that different constellations lead to similar results. evant to the particular topical dimension have been scaled down. In order to determine which dimension(s) are responsible for a particular sense of the word, the method is embedded in a clustering approach. 
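Before turning to the clustering procedure, the interleaved factorization of figure 1 and the dimension-subtraction step of equation 4 can be sketched as follows, reusing the KL update helpers from the previous snippet; matrix names follow the figure, but this is an illustration of the procedure rather than the authors' implementation:

import numpy as np

def interleaved_nmf(A, B, C, k, n_iter=50, seed=0):
    """A: words x dependency relations, B: words x context words,
    C: context words x dependency relations (non-negative co-occurrence counts)."""
    rng = np.random.default_rng(seed)
    i, j = A.shape                     # words, dependency relations
    s = B.shape[1]                     # context words
    W, H = rng.random((i, k)), rng.random((k, j))
    G, F = rng.random((k, s)), rng.random((k, j))
    for _ in range(n_iter):
        W = kl_update_W(A, W, H)       # update W for A ~ W H
        V = W                          # copy W into V
        G = kl_update_H(B, V, G)       # update G for B ~ V G
        U = G.T                        # copy G^T into U (context words x k)
        F = kl_update_H(C, U, F)       # update F for C ~ U F
        H = F                          # copy F back into H and restart the loop
    return W, G, H

def subtract_dimension(v, h_k):
    # Equation 4: t = v (u1 - h_k); scale down each dependency feature of the
    # noun vector v according to its load on the subtracted dimension h_k.
    return v * (1.0 - h_k)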
First, a specific word is assigned to its predominant sense (i.e. the most similar cluster). Next, the dominant semantic dimension(s) for this cluster are subtracted from the word vector, and the resulting vector is fed to the clustering algorithm again, to see if other word senses emerge. The dominant semantic dimension(s) can be identified by folding vector c – representing the cluster centroid – into the factorization (equation 5). This yields a probability vector b over latent factors for the particular centroid. b = cHT (5) A simple k-means algorithm is used to compute the initial clustering, using the non-factorized dependency-based feature vectors (matrix A). kmeans yields a hard clustering, in which each noun is assigned to exactly one (dominant) cluster. In the second step, we determine for each noun whether it can be assigned to other, less dominant clusters. First, the salient dimension(s) of the centroid to which the noun is assigned are determined. The centroid of the cluster is computed by averaging the frequencies of all cluster elements except for the target word we want to reassign. After subtracting the salient dimensions from the noun vector, we check whether the vector is reassigned to another cluster centroid. If this is the case, (another instance of) the noun is assigned to the cluster, and the second step is repeated. If there is no reassignment, we continue with the next word. The target element is removed from the centroid to make sure that only the dimensions associated with the sense of the cluster are subtracted. When the algorithm is finished, each noun is assigned to a number of clusters, representing its different senses. We use two different methods for selecting the final number of candidate senses. The first method, NMFcon, takes a conservative approach, and only selects candidate senses if – after the subtraction of salient dimensions – another sense is found that is more similar2 to the adapted noun vector than the 2We use the cosine measure for our similarity calculations. 1479 dominant sense. The second method, NMFlib, is more liberal, and also selects the next best cluster centroid as candidate sense until a certain similarity threshold φ is reached.3 3.3 Word sense disambiguation The sense inventory that results from the induction step can now be used for the disambiguation of individual instances as follows. For each instance of the target noun, we extract its context words, i.e. the words that co-occur in the same paragraph, and represent them as a probability vector f. Using matrix G from our factorization model (which represents context words by semantic dimensions), this vector can be folded into the semantic space, thus representing a probability vector over latent factors for the particular instance of the target noun (equation 6). d = fGT (6) Likewise, the candidate senses of the noun (represented as centroids) can be folded into our semantic space using matrix H (equation 5). This yields a probability distribution over the semantic dimensions for each centroid. As a last step, we compute the Kullback-Leibler divergence between the context vector and the candidate centroids, and select the candidate centroid that yields the lowest divergence as the correct sense. The disambiguation process is represented graphically in figure 2. 3.4 Example Let us clarify the process with an example for the noun chip. The sense induction algorithm finds the following candidate senses:4 1. cache, CPU, memory, microprocessor, processor, RAM, register 2. 
bread, cake, chocolate, cookie, recipe, sandwich 3. accessory, equipment, goods, item, machinery, material, product, supplies 3Experimentally (examining the cluster output), we set φ = 0.2 4Note that we do not use the word sense to hint at a lexicographic meaning distinction; rather, sense in this case should be regarded as a more coarse-grained and topic-related entity. G' k s s context vector k cluster centroid j cluster centroid j cluster centroid j H' k j k k k Figure 2: Graphical representation of the disambiguation process Each candidate sense is associated with a centroid (the average frequency vector of the cluster’s members), that is folded into the semantic space, which yields a ‘semantic fingerprint’, i.e. a distribution over the semantic dimensions. For the first sense, the ‘computer’ dimension will be the most important. Likewise, for the second and the third sense the ‘food’ dimension and the ‘manufacturing’ dimension will be the most important.5 Let us now take a particular instance of the noun chip, such as the one in (1). (1) An N.V. Philips unit has created a computer system that processes video images 3,000 times faster than conventional systems. Using reduced instruction - set computing, or RISC, chips made by Intergraph of Huntsville, Ala., the system splits the image it ‘sees’ into 20 digital representations, each processed by one chip. Looking at the context of the particular instance of chip, a context vector is created which represents the semantic content words that appear in the same paragraph (the extracted content words are printed in boldface). This context vector is again folded into the semantic space, yielding a distribution over the semantic dimensions. By selecting the lowest 5In the majority of cases, the induced dimensions indeed contain such clear-cut semantics, so that the dimensions can be rightfully labeled as above. 1480 Kullback-Leibler divergence between the semantic probability distribution of the target instance and the semantic probability distributions of the candidate senses, the algorithm is able to assign the ‘computer’ sense of the target noun chip. 4 Evaluation 4.1 Dataset Our word sense induction and disambiguation model is trained and tested on the dataset of the SEMEVAL-2010 WSI/WSD task (Manandhar et al., 2010). The SEMEVAL-2010 WSI/WSD task is based on a dataset of 100 target words, 50 nouns and 50 verbs. For each target word, a training set is provided from which the senses of the word have to be induced without using any other resources. The training set for a target word consists of a set of target word instances in context (sentences or paragraphs). The complete training set contains 879,807 instances, viz. 716,945 noun and 162,862 verb instances. The senses induced during training are used for disambiguation in the testing phase. In this phase, the system is provided with a test set that consists of unseen instances of the target words. The test set contains 8,915 instances in total, of which 5,285 nouns and 3,630 verbs. The instances in the test set are tagged with OntoNotes senses (Hovy et al., 2006). The system needs to disambiguate these instances using the senses acquired during training. 
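Before moving to the experimental setup, the disambiguation step of sections 3.3 and 3.4 can be summarized in a short sketch: the instance's context vector and each candidate centroid are folded into the latent space (equations 6 and 5), and the centroid with the lowest Kullback-Leibler divergence is selected. Variable names are illustrative, and the folding assumes the matrices G and H from the factorization above:

import numpy as np

def fold_in(vec, M):
    # Fold a frequency vector into the latent space (equation 5 with M = H for
    # centroids, equation 6 with M = G for context vectors) and renormalize.
    p = vec @ M.T
    return p / (p.sum() + 1e-12)

def kl_divergence(p, q, eps=1e-12):
    mask = p > 0
    return float(np.sum(p[mask] * np.log((p[mask] + eps) / (q[mask] + eps))))

def disambiguate(context_vec, centroids, G, H):
    """Return the index of the candidate sense whose latent distribution is
    closest to the latent distribution of the instance's context."""
    d = fold_in(context_vec, G)                  # instance over latent dimensions
    b = [fold_in(c, H) for c in centroids]       # candidate senses over dimensions
    return int(np.argmin([kl_divergence(d, bi) for bi in b]))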
4.2 Implementational details The SEMEVAL training set has been part of speech tagged and lemmatized with the Stanford Part-OfSpeech Tagger (Toutanova and Manning, 2000; Toutanova et al., 2003) and parsed with MaltParser (Nivre et al., 2006), trained on sections 221 of the Wall Street Journal section of the Penn Treebank extended with about 4000 questions from the QuestionBank6 in order to extract dependency triples. The SEMEVAL test set has only been tagged and lemmatized, as our disambiguation model does not use dependency triples as features (contrary to the induction model). 6http://maltparser.org/mco/english_ parser/engmalt.html We constructed two different models – one for nouns and one for verbs. For each model, the matrices needed for our interleaved NMF factorization are extracted from the corpus. The noun model was built using 5K nouns, 80K dependency relations, and 2K context words (excluding stop words) with highest frequency in the training set, which yields matrices of 5K nouns × 80K dependency relations, 5K nouns × 2K context words, and 80K dependency relations × 2K context words. The model for verbs was constructed analogously, using 3K verbs, and the same number of dependency relations and context words. For our initial k-means clustering, we set k = 600 for nouns, and k = 400 for verbs. For the underlying interleaved NMF model, we used 50 iterations, and factored the model to 50 dimensions. 4.3 Evaluation measures The results of the systems participating in the SEMEVAL-2010 WSI/WSD task are evaluated both in a supervised and in an unsupervised manner. The supervised evaluation in the SEMEVAL-2010 WSI/WSD task follows the scheme of the SEMEVAL2007 WSI task (Agirre and Soroa, 2007), with some modifications. One part of the test set is used as a mapping corpus, which maps the automatically induced clusters to gold standard senses; the other part acts as an evaluation corpus. The mapping between clusters and gold standard senses is used to tag the evaluation corpus with gold standard tags. The systems are then evaluated as in a standard WSD task, using recall. In the unsupervised evaluation, the induced senses are evaluated as clusters of instances which are compared to the sets of instances tagged with the gold standard senses (corresponding to classes). Two partitions are thus created over the test set of a target word: a set of automatically generated clusters and a set of gold standard classes. A number of these instances will be members of both one gold standard class and one cluster. Consequently, the quality of the proposed clustering solution is evaluated by comparing the two groupings and measuring their similarity. Two evaluation metrics are used during the unsupervised evaluation in order to estimate the quality of the clustering solutions, the V-Measure (Rosenberg and Hirschberg, 2007) and the paired F1481 Score (Artiles et al., 2009). V-Measure assesses the quality of a clustering by measuring its homogeneity (h) and its completeness (c). Homogeneity refers to the degree that each cluster consists of data points primarily belonging to a single gold standard class, while completeness refers to the degree that each gold standard class consists of data points primarily assigned to a single cluster. V-Measure is the harmonic mean of h and c. V M = 2 · h · c h + c (7) In the paired F-Score (Artiles et al., 2009) evaluation, the clustering problem is transformed into a classification problem (Manandhar et al., 2010). 
A set of instance pairs is generated from the automatically induced clusters, which comprises pairs of the instances found in each cluster. Similarly, a set of instance pairs is created from the gold standard classes, containing pairs of the instances found in each class. Precision is then defined as the number of common instance pairs between the two sets to the total number of pairs in the clustering solution (cf. formula 8). Recall is defined as the number of common instance pairs between the two sets to the total number of pairs in the gold standard (cf. formula 9). Precision and recall are finally combined to produce the harmonic mean (cf. formula 10). P = |F(K) ∩F(S)| |F(K)| (8) R = |F(K) ∩F(S)| |F(S)| (9) FS = 2 · P · R P + R (10) The obtained results are also compared to two baselines. The most frequent sense (MFS) baseline groups all testing instances of a target word into one cluster. The Random baseline randomly assigns an instance to one of the clusters.7 This baseline is executed five times and the results are averaged. 7The number of clusters in Random was chosen to be roughly equal to the average number of senses in the gold standard. 4.4 Results 4.4.1 Unsupervised evaluation In table 1, we present the performance of a number of algorithms on the V-measure. We compare our V-measure scores with the scores of the best-ranked systems in the SEMEVAL 2010 WSI/WSD task, both for the complete data set and for nouns and verbs separately. The fourth column shows the average number of clusters induced in the test set by each algorithm. The MFS baseline has a V-Measure equal to 0, since by definition its completeness is 1 and its homogeneity is 0. NMFcon – our model that takes a conservative approach in the induction of candidate senses – does not beat the random baseline. NMFlib – our model that is more liberal in inducing senses – reaches better results. With 11.8%, it scores similar to other algorithms that induce a similar average number of clusters, such as Duluth-WSI (Pedersen, 2010). Pedersen (2010) has shown that the V-Measure tends to favour systems producing a higher number of clusters than the number of gold standard senses. This is reflected in the scores of our models as well. VM (%) all noun verb #cl Hermit 16.2 16.7 15.6 10.78 UoY 15.7 20.6 8.5 11.54 KSU KDD 15.7 18.0 12.4 17.50 NMFlib 11.8 13.5 9.4 4.80 Duluth-WSI 9.0 11.4 5.7 4.15 Random 4.4 4.2 4.6 4.00 NMFcon 3.9 3.9 3.9 1.58 MFS 0.0 0.0 0.0 1.00 Table 1: Unsupervised V-measure evaluation on SEMEVAL test set Motivated by the large divergences in the system rankings on the different metrics used in the SEMEVAL-2010 WSI/WSD task, Pedersen evaluated the metrics themselves. His evaluation relied on the assumption that a good measure should assign low scores to random baselines. Pedersen showed that the V-Measure continued to improve as randomness increased. We agree with Pedersen’s conclusion that the V-Measure results should be interpreted with caution, but we still report the results in order 1482 to perform a global comparison, on all metrics, of our system’s performance to the systems that participated to the SEMEVAL task. Contrary to V-Measure, paired F-score is a fairly reliable measure and the only one that managed to identify and expose random baselines in the above mentioned metric evaluation. This means that the random systems used for testing were ranked low when a high number of random senses was used. In table 2, the paired F-Score of a number of algorithms is given. 
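As a concrete reference before discussing the scores in table 2: the paired F-Score of formulas 8-10 reduces to set operations over within-group instance pairs, roughly as in the following sketch (not the official task scorer):

from itertools import combinations

def within_pairs(assignment):
    # assignment: dict mapping instance id -> group id (cluster or gold class);
    # returns the set of unordered instance pairs that share a group.
    groups = {}
    for inst, g in assignment.items():
        groups.setdefault(g, []).append(inst)
    pairs = set()
    for members in groups.values():
        pairs.update(frozenset(p) for p in combinations(members, 2))
    return pairs

def paired_f_score(induced, gold):
    K, S = within_pairs(induced), within_pairs(gold)   # F(K) and F(S)
    if not K or not S:
        return 0.0
    p = len(K & S) / len(K)                            # precision, formula 8
    r = len(K & S) / len(S)                            # recall, formula 9
    return 2 * p * r / (p + r) if (p + r) else 0.0     # harmonic mean, formula 10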
The paired F-Score penalizes systems when they produce a higher number of clusters (low recall) or a lower number of clusters (low precision) than the gold standard number of senses. We again compare our results with the scores of the bestranked systems in the SEMEVAL-2010 WSI/WSD TASK. FS (%) all noun verb #cl MFS 63.5 57.0 72.7 1.00 Duluth-WSI-SVD-Gap 63.3 57.0 72.4 1.02 NMFcon 60.2 54.6 68.4 1.58 NMFlib 45.3 42.2 49.8 5.42 Duluth-WSI 41.1 37.1 46.7 4.15 Random 31.9 30.4 34.1 4.00 Table 2: Unsupervised paired F-score evaluation on SEMEVAL testset NMFcon reaches a score of 60.2%, which is again similar to other algorithms that induce the same average number of clusters. NMFlib scores 45.3%, indicating that the algorithm is able to retain a reasonable F-Score while at the same time inducing a significant number of clusters. This especially becomes clear when comparing its score to the other algorithms. 4.4.2 Supervised evaluation In the supervised evaluation, the automatically induced clusters are mapped to gold standard senses, using the mapping corpus (i.e. one part of the test set). The obtained mapping is used to tag the evaluation corpus (i.e. the other part of the test set) with gold standard tags, which means that the methods are evaluated in a standard WSD task. Table 3 shows the recall of our algorithms in the supervised evaluation, again compared to other algorithms evaluated in the SEMEVAL-2010 WSI/WSD task. SR (%) all noun verb #S NMFlib 62.6 57.3 70.2 1.82 UoY 62.4 59.4 66.8 1.51 Duluth-WSI 60.5 54.7 68.9 1.66 NMFcon 60.3 54.5 68.8 1.21 MFS 58.7 53.2 66.6 1.00 Random 57.3 51.5 65.7 1.53 Table 3: Supervised recall for SEMEVAL testset, 80% mapping, 20% evaluation NMFlib gets 62.6%, which makes it the best scoring algorithm on the supervised evaluation. NMFcon reaches 60.3%, which again indicates that it is in the same ballpark as other algorithms that induce a similar average number of senses. Some doubts have been cast on the representativeness of the supervised recall results as well. According to Pedersen (2010), the supervised learning algorithm that underlies this evaluation method tends to converge to the Most Frequent Sense (MFS) baseline, because the number of senses that the classifier assigns to the test instances is rather low. We think these shortcomings indicate the need for the development of new evaluation metrics, capable of providing a more accurate evaluation of the performance of WSI systems. Nevertheless, these metrics still constitute a useful testbed for comparing the performance of different systems. 5 Conclusion and future work In this paper, we presented a model based on latent semantics that is able to perform word sense induction as well as disambiguation. Using latent topical dimensions, the model is able to discriminate between different senses of a word, and subsequently disambiguate particular instances of a word. The evaluation results indicate that our model reaches state-of-the-art performance compared to other systems that participated in the SEMEVAL-2010 word sense induction and disambiguation task. Moreover, our global approach is able to reach similar performance on an evaluation set that is tuned to fit the needs of local approaches. The evaluation set con1483 tains an enormous amount of contexts for only a small number of target words, favouring methods that induce senses on a per-word basis. 
A global approach like ours is likely to induce a more balanced sense inventory using an unbiased corpus, and is likely to outperform local methods when such an unbiased corpus is used as input. We therefore think that the global, unified approach to word sense induction and disambiguation presented here provides a genuine and powerful solution to the problem at hand. We conclude with some issues for future work. First of all, we would like to evaluate the approach presented here using a more balanced and unbiased corpus, and compare its performance on such a corpus to local approaches. Secondly, we would also like to include grammatical dependency information in the disambiguation step of the algorithm. For now, the disambiguation step only uses a word’s context words; enriching the feature set with dependency information is likely to improve the performance of the disambiguation. Acknowledgments This work is supported by the Scribo project, funded by the French ‘pˆole de comp´etitivit´e’ System@tic, and by the French national grant EDyLex (ANR-09CORD-008). References Eneko Agirre and Aitor Soroa. 2007. SemEval-2007 Task 02: Evaluating word sense induction and discrimination systems. In Proceedings of the fourth International Workshop on Semantic Evaluations (SemEval), ACL, pages 7–12, Prague, Czech Republic. Eneko Agirre, David Mart´ınez, Ojer L´opez de Lacalle, and Aitor Soroa. 2006. Two graph-based algorithms for state-of-the-art WSD. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-06), pages 585–593, Sydney, Australia. Marianna Apidianaki and Tim Van de Cruys. 2011. A Quantitative Evaluation of Global Word Sense Induction. In Proceedings of the 12th International Conference on Intelligent Text Processing and Computational Linguistics (CICLing), published in Springer Lecture Notes in Computer Science (LNCS), volume 6608, pages 253–264, Tokyo, Japan. Javier Artiles, Enrique Amig´o, and Julio Gonzalo. 2009. The role of named entities in web people search. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-09), pages 534–542, Singapore. Stefan Bordag. 2006. Word sense induction: Tripletbased clustering and automatic evaluation. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-06), pages 137–144, Trento, Italy. Zellig S. Harris. 1954. Distributional structure. Word, 10(23):146–162. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the Human Language Technology / North American Association of Computational Linguistics conference (HLT-NAACL06), pages 57–60, New York, NY. Nancy Ide and Yorick Wilks. 2007. Making Sense About Sense. In Eneko Agirre and Philip Edmonds, editors, Word Sense Disambiguation, Algorithms and Applications, pages 47–73. Springer. Thomas Landauer and Susan Dumais. 1997. A solution to Plato’s problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychology Review, 104:211–240. Thomas Landauer, Peter Foltz, and Darrell Laham. 1998. An Introduction to Latent Semantic Analysis. Discourse Processes, 25:295–284. Daniel D. Lee and H. Sebastian Seung. 2000. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems, volume 13, pages 556–562. Dekang Lin. 1998. Automatic Retrieval and Clustering of Similar Words. 
In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL98), volume 2, pages 768–774, Montreal, Quebec, Canada. Suresh Manandhar, Ioannis P. Klapaftis, Dmitriy Dligach, and Sameer S. Pradhan. 2010. SemEval-2010 Task 14: Word Sense Induction & Disambiguation. In Proceedings of the fifth International Workshop on Semantic Evaluation (SemEval), ACL-10, pages 63–68, Uppsala, Sweden. Roberto Navigli. 2009. Word Sense Disambiguation: a Survey. ACM Computing Surveys, 41(2):1–69. Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: A data-driven parser-generator for dependency parsing. In Proceedings of the fifth International Conference on Language Resources and Evaluation (LREC-06), pages 2216–2219, Genoa, Italy. 1484 Sebastian Pad´o and Mirella Lapata. 2007. Dependencybased construction of semantic space models. Computational Linguistics, 33(2):161–199. Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 613–619, Edmonton, Alberta, Canada. Ted Pedersen. 2010. Duluth-WSI: SenseClusters Applied to the Sense Induction Task of SemEval-2. In Proceedings of the fifth International Workshop on Semantic Evaluations (SemEval-2010), pages 363–366, Uppsala, Sweden. Amruta Purandare and Ted Pedersen. 2004. Word Sense Discrimination by Clustering Contexts in Vector and Similarity Spaces. In Proceedings of the Conference on Computational Natural Language Learning (CoNLL), pages 41–48, Boston, MA. Andrew Rosenberg and Julia Hirschberg. 2007. Vmeasure: A conditional entropy-based external cluster evaluation measure. In Proceedings of the Joint 2007 Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 410–420, Prague, Czech Republic. Hinrich Sch¨utze. 1998. Automatic Word Sense Discrimination. Computational Linguistics, 24(1):97–123. Kristina Toutanova and Christopher D. Manning. 2000. Enriching the Knowledge Sources Used in a Maximum Entropy Part-of-Speech Tagger. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-2000), pages 63–70. Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-Rich Part-ofSpeech Tagging with a Cyclic Dependency Network. In Proceedings of the Human Language Technology / North American Association of Computational Linguistics conference (HLT-NAACL-03, pages 252–259, Edmonton, Canada. Tim Van de Cruys. 2008. Using Three Way Data for Word Sense Discrimination. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING-08), pages 929–936, Manchester, UK. Jean V´eronis. 2004. Hyperlex: lexical cartography for information retrieval. Computer Speech & Language, 18(3):223–252. Dominic Widdows and Beate Dorow. 2002. A Graph Model for Unsupervised Lexical Acquisition. In Proceedings of the 19th International Conference on Computational Linguistics (COLING-02), pages 1093– 1099, Taipei, Taiwan. 1485
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1486–1495, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Confidence Driven Unsupervised Semantic Parsing Dan Goldwasser ∗ Roi Reichart † James Clarke ∗ Dan Roth ∗ ∗Department of Computer Science, University of Illinois at Urbana-Champaign {goldwas1,clarkeje,danr}@illinois.edu † Computer Science and Artificial Intelligence Laboratory, MIT [email protected] Abstract Current approaches for semantic parsing take a supervised approach requiring a considerable amount of training data which is expensive and difficult to obtain. This supervision bottleneck is one of the major difficulties in scaling up semantic parsing. We argue that a semantic parser can be trained effectively without annotated data, and introduce an unsupervised learning algorithm. The algorithm takes a self training approach driven by confidence estimation. Evaluated over Geoquery, a standard dataset for this task, our system achieved 66% accuracy, compared to 80% of its fully supervised counterpart, demonstrating the promise of unsupervised approaches for this task. 1 Introduction Semantic parsing, the ability to transform Natural Language (NL) input into a formal Meaning Representation (MR), is one of the longest standing goals of natural language processing. The importance of the problem stems from both theoretical and practical reasons, as the ability to convert NL into a formal MR has countless applications. The term semantic parsing has been used ambiguously to refer to several semantic tasks (e.g., semantic role labeling). We follow the most common definition of this task: finding a mapping between NL input and its interpretation expressed in a welldefined formal MR language. Unlike shallow semantic analysis tasks, the output of a semantic parser is complete and unambiguous to the extent it can be understood or even executed by a computer system. Current approaches for this task take a data driven approach (Zettlemoyer and Collins, 2007; Wong and Mooney, 2007), in which the learning algorithm is given a set of NL sentences as input and their corresponding MR, and learns a statistical semantic parser — a set of parameterized rules mapping lexical items and syntactic patterns to their MR. Given a sentence, these rules are applied recursively to derive the most probable interpretation. Since semantic interpretation is limited to the syntactic patterns observed in the training data, in order to work well these approaches require considerable amounts of annotated data. Unfortunately annotating sentences with their MR is a time consuming task which requires specialized domain knowledge and therefore minimizing the supervision effort is one of the key challenges in scaling semantic parsers. In this work we present the first unsupervised approach for this task. Our model compensates for the lack of training data by employing a self training protocol based on identifying high confidence self labeled examples and using them to retrain the model. We base our approach on a simple observation: semantic parsing is a difficult structured prediction task, which requires learning a complex model, however identifying good predictions can be done with a far simpler model capturing repeating patterns in the predicted data. We present several simple, yet highly effective confidence measures capturing such patterns, and show how to use them to train a semantic parser without manually annotated sentences. 
Our basic premise, that predictions with high confidence score are of high quality, is further used to improve the performance of the unsupervised train1486 ing procedure. Our learning algorithm takes an EMlike iterative approach, in which the predictions of the previous stage are used to bias the model. While this basic scheme was successfully applied to many unsupervised tasks, it is known to converge to a sub optimal point. We show that by using confidence estimation as a proxy for the model’s prediction quality, the learning algorithm can identify a better model compared to the default convergence criterion. We evaluate our learning approach and model on the well studied Geoquery domain (Zelle and Mooney, 1996; Tang and Mooney, 2001), consisting of natural language questions and their prolog interpretations used to query a database consisting of U.S. geographical information. Our experimental results show that using our approach we are able to train a good semantic parser without annotated data, and that using a confidence score to identify good models results in a significant performance improvement. 2 Semantic Parsing We formulate semantic parsing as a structured prediction problem, mapping a NL input sentence (denoted x), to its highest ranking MR (denoted z). In order to correctly parametrize and weight the possible outputs, the decision relies on an intermediate representation: an alignment between textual fragments and their meaning representation (denoted y). Fig. 1 describes a concrete example of this terminology. In our experiments the input sentences x are natural language queries about U.S. geography taken from the Geoquery dataset. The meaning representation z is a formal language database query, this output representation language is described in Sec. 2.1. The prediction function, mapping a sentence to its corresponding MR, is formalized as follows: ˆz = Fw(x) = arg max y∈Y,z∈Z wT Φ(x, y, z) (1) Where Φ is a feature function defined over an input sentence x, alignment y and output z. The weight vector w contains the model’s parameters, whose values are determined by the learning process. We refer to the arg max above as the inference problem. Given an input sentence, solving this inHow many states does the Colorado river run through? count( state( traverse( river( const(colorado)))) x z y Figure 1: Example of an input sentence (x), meaning representation (z) and the alignment between the two (y) for the Geoquery domain ference problem based on Φ and w is what compromises our semantic parser. In practice the parsing decision is decomposed into smaller decisions (Sec. 2.2). Sec. 4 provides more details about the feature representation and inference procedure used. Current approaches obtain w using annotated data, typically consisting of (x, z) pairs. In Sec. 3 we describe our unsupervised learning procedure, that is how to obtain w without annotated data. 2.1 Target Meaning Representation The output of the semantic parser is a logical formula, grounding the semantics of the input sentence in the domain language (i.e., the Geoquery domain). We use a subset of first order logic consisting of typed constants (corresponding to specific states, etc.) and functions, which capture relations between domains entities and properties of entities (e.g., population : E →N). The semantics of the input sentence is constructed via functional composition, done by the substitution operator. 
For example, given the function next to(x) and the expression const(texas), substitution replaces the occurrence of the free variable x with the expression, resulting in a new formula: next to(const(texas)). For further details we refer the reader to (Zelle and Mooney, 1996). 2.2 Semantic Parsing Decisions The inference problem described in Eq. 1 selects the top ranking output formula. In practice this decision is decomposed into smaller decisions, capturing local mapping of input tokens to logical fragments and their composition into larger fragments. These decisions are further decomposed into a feature representation, described in Sec. 4. The first type of decisions are encoded directly by the alignment (y) between the input tokens and their corresponding predicates. We refer to these as first 1487 order decisions. The pairs connected by the alignment (y) in Fig. 1 are examples of such decisions. The final output structure z is constructed by composing individual predicates into a complete formula. For example, consider the formula presented in Fig. 1: river( const(colorado)) is a composition of two predicates river and const(colorado). We refer to the composition of two predicates, associated with their respective input tokens, as second order decisions. In order to formulate these decisions, we introduce the following notation. c is a constituent in the input sentence x and D is the set of all function and constant symbols in the domain. The alignment y is a set of mappings between constituents and symbols in the domain y = {(c, s)} where s ∈D. We denote by si the i-th output predicate composition in z, by si−1(si) the composition of the (i−1)th predicate on the i-th predicate and by y(si) the input word corresponding to that predicate according to the alignment y. 3 Unsupervised Semantic Parsing Our learning framework takes a self training approach in which the learner is iteratively trained over its own predictions. Successful application of this approach depends heavily on two important factors - how to select high quality examples to train the model on, and how to define the learning objective so that learning can halt once a good model is found. Both of these questions are trivially answered when working in a supervised setting: by using the labeled data for training the model, and defining the learning objective with respect to the annotated data (for example, loss-minimization in the supervised version of our system). In this work we suggest to address both of the above concerns by approximating the quality of the model’s predictions using a confidence measure computed over the statistics of the self generated predictions. Output structures which fall close to the center of mass of these statistics will receive a high confidence score. The first issue is addressed by using examples assigned a high confidence score to train the model, acting as labeled examples. We also note that since the confidence score provides a good indication for the model’s prediction performance, it can be used to approximate the overall model performance, by observing the model’s total confidence score over all its predictions. This allows us to set a performance driven goal for our learning process - return the model maximizing the confidence score over all predictions. We describe the details of integrating the confidence score into the learning framework in Sec. 3.1. 
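Returning briefly to the decision notation of Sec. 2.2, over which the confidence measures below are defined: the query of Fig. 1 decomposes into first-order and second-order decisions roughly as follows (an illustrative rendering, not the authors' data structures):

# First-order decisions: the alignment y = {(c, s)} between constituents and
# domain symbols for "How many states does the Colorado river run through?"
first_order = [
    ("How many", "count"),
    ("states", "state"),
    ("run through", "traverse"),
    ("river", "river"),
    ("Colorado", "const(colorado)"),
]

# Second-order decisions: predicate compositions s_{i-1}(s_i), each paired with
# the constituents that the two predicates align to under y.
second_order = [
    (("count", "state"), ("How many", "states")),
    (("state", "traverse"), ("states", "run through")),
    (("traverse", "river"), ("run through", "river")),
    (("river", "const(colorado)"), ("river", "Colorado")),
]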
Although using the model’s prediction score (i.e., wT Φ(x, y, z)) as an indication of correctness is a natural choice, we argue and show empirically, that unsupervised learning driven by confidence estimation results in a better performing model. This empirical behavior also has theoretical justification: training the model using examples selected according to the model’s parameters (i.e., the top ranking structures) may not generalize much further beyond the existing model, as the training examples will simply reinforce the existing model. The statistics used for confidence estimation are different than those used by the model to create the output structures, and can therefore capture additional information unobserved by the prediction model. This assumption is based on the well established idea of multi-view learning, applied successfully to many NL applications (Blum and Mitchell, 1998; Collins and Singer, 1999). According to this idea if two models use different views of the data, each of them can enhance the learning process of the other. The success of our learning procedure hinges on finding good confidence measures, whose confidence prediction correlates well with the true quality of the prediction. The ability of unsupervised confidence estimation to provide high quality confidence predictions can be explained by the observation that prominent prediction patterns are more likely to be correct. If a non-random model produces a prediction pattern multiple times it is likely to be an indication of an underlying phenomenon in the data, and therefore more likely to be correct. Our specific choice of confidence measures is guided by the intuition that unlike structure prediction (i.e., solving the inference problem) which requires taking statistics over complex and intricate patterns, identifying high quality predictions can be done using much simpler patterns that are significantly easier to capture. In the reminder of this section we describe our 1488 Algorithm 1 Unsupervised Confidence driven Learning Input: Sentences {xl}N l=1, initial weight vector w 1: define Confidence : X × Y × Z →R, i = 0, Si = ∅ 2: repeat 3: for l = 1, . . . , N do 4: ˆy, ˆz = arg maxy,z wT Φ(xl, y, z) 5: Si = Si ∪{xl, ˆy, ˆz} 6: end for 7: Confidence = compute confidence statistics 8: Sconf i = select from Si using Confidence 9: wi ←Learn(∪iSconf i ) 10: i = i + 1 11: until Sconf i has no new unique examples 12: best = arg maxi(P s∈Si Confidence(s))/|S| 13: return wbest learning approach. We begin by introducing the overall learning framework (Sec. 3.1), we then explain the rational behind confidence estimation over self-generated data and introduce the confidence measures used in our experiments (Sec. 3.2). We conclude with a description of the specific learning algorithms used for updating the model (Sec. 3.3). 3.1 Unsupervised Confidence-Driven Learning Our learning framework works in an EM-like manner, iterating between two stages: making predictions based on its current set of parameters and then retraining the model using a subset of the predictions, assigned high confidence. The learning process “discovers” new high confidence training examples to add to its training set over multiple iterations, and converges when the model no longer adds new training examples. While this is a natural convergence criterion, it provides no performance guarantees, and in practice it is very likely that the quality of the model (i.e., its performance) fluctuates during the learning process. 
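Algorithm 1, whose pseudocode appears inline above, is essentially a confidence-filtered self-training loop. A minimal sketch follows, assuming an infer routine that solves Eq. 1 under the current weights and returns the top (y, z) pair, a learn routine as in Sec. 3.3, and a batch confidence scorer such as the unigram translation score defined in Sec. 3.2 below; all names, and the estimation of the unigram probabilities from batch counts, are our own illustration:

from collections import Counter

def unigram_confidence(preds):
    # preds: list of (x, y, z) with y a list of (word, symbol) alignments.
    # One plausible estimate of p(s_i | y(s_i)) from batch statistics.
    counts, totals = Counter(), Counter()
    for _, y, _ in preds:
        for word, sym in y:
            counts[(word, sym)] += 1
            totals[word] += 1
    scores = []
    for _, y, _ in preds:
        p = 1.0
        for word, sym in y:
            p *= counts[(word, sym)] / totals[word]
        scores.append(p)
    return scores

def train_unsupervised(sentences, w0, infer, batch_confidence, learn,
                       top_k=20, max_iter=20):
    """Confidence-driven self-training (a sketch of Algorithm 1)."""
    w = w0
    models, avg_conf = [], []          # one entry per model used for prediction
    pool, seen = [], set()             # union of selected training examples
    for _ in range(max_iter):
        preds = [(x,) + infer(x, w) for x in sentences]   # (x, y_hat, z_hat)
        scores = batch_confidence(preds)
        models.append(w)
        avg_conf.append(sum(scores) / len(scores))
        ranked = sorted(zip(scores, preds), key=lambda t: -t[0])
        selected = [p for _, p in ranked[:top_k]]         # S_i^conf
        new = [p for p in selected if repr(p) not in seen]
        if not new:                    # no new unique examples: stop (line 11)
            break
        seen.update(repr(p) for p in new)
        pool.extend(new)
        w = learn(pool)                # retrain on the union (line 9)
    best = max(range(len(models)), key=avg_conf.__getitem__)
    return models[best]                # highest overall confidence (lines 12-13)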
We follow the observation that confidence estimation can be used to approximate the performance of the entire model and return the model with the highest overall prediction confidence. We describe this algorithmic framework in detail in Alg. 1. Our algorithm takes as input a set of natural language sentences and a set of parameters used for making the initial predictions1. The algorithm then iterates between the two stages - predicting the output structure for each sentence (line 4), and updating the set of parameters (line 9). The specific learning algorithms used are discussed in Sec. 3.3. The training examples required for learning are obtained by selecting high confidence examples - the algorithm first takes statistics over the current predicted set of output structures (line 7), and then based on these statistics computes a confidence score for each structure, selecting the top ranked ones as positive training examples, and if needed, the bottom ones as negative examples (line 8). The set of top confidence examples (for either correct or incorrect prediction), at iteration i of the algorithm, is denoted Sconf i . The exact nature of the confidence computation is discussed in Sec. 3.2. The algorithm iterates between these two stages, at each iteration it adds more self-annotated examples to its training set, learning therefore converges when no new examples are added (line 11). The algorithm keeps track of the models it trained at each stage throughout this process, and returns the one with the highest averaged overall confidence score (lines 12-13). At each stage, the overall confidence score is computed by averaging over all the confidence scores of the predictions made at that stage. 3.2 Unsupervised Confidence Estimation Confidence estimation is calculated over a batch of input (x) - output (z) pairs. Each pair decomposes into smaller first order and second order decisions (defined Sec. 2.2). Confidence estimation is done by computing the statistics of these decisions, over the entire set of predicted structures. In the rest of this section we introduce the confidence measures used by our system. Translation Model The first approach essentially constructs a simplified translation model, capturing word-to-predicate mapping patterns. This can be considered as an abstraction of the prediction model: we collapse the intricate feature representation into 1Since we commit to the max-score output prediction, rather than summing over all possibilities, we require a reasonable initialization point. We initialized the weight vector using simple, straight-forward heuristics described in Sec. 5. 1489 high level decisions and take statistics over these decisions. Since it takes statistics over considerably less variables than the actual prediction model, we expect this model to make reliable confidence predictions. We consider two variations of this approach, the first constructs a unigram model over the first order decisions and the second a bigram model over the second order decisions. Formally, given a set of predicted structures we define the following confidence scores: Unigram Score: p(z|x) = |z| Y i=1 p(si|y(si)) Bigram Score: p(z|x) = |z| Y i=1 p(si−1(si)|y(si−1), y(si)) Structural Proportion Unlike the first approach which decomposes the predicted structure into individual decisions, this approach approximates the model’s performance by observing global properties of the structure. 
We take statistics over the proportion between the number of predicates in z and the number of words in x. Given a set of structure predictions S, we compute this proportion for each structure (denoted as Prop(x, z)) and calculate the average proportion over the entire set (denoted as AvProp(S)). The confidence score assigned to a given structure (x, y) is simply the difference between its proportion and the averaged proportion, or formally PropScore(S, (x, z)) = AvProp(S)−Prop(x, z) This measure captures the global complexity of the predicted structure and penalizes structures which are too complex (high negative values) or too simplistic (high positive values). Combined The two approaches defined above capture different views of the data, a natural question is then - can these two measures be combined to provide a more powerful estimation? We suggest a third approach which combines the first two approaches. It first uses the score produced by the latter approach to filter out unlikely candidates, and then ranks the remaining ones with the former approach and selects those with the highest rank. 3.3 Learning Algorithms Given a set of self generated structures, the parameter vector can be updated (line 9 in Alg. 1). We consider two learning algorithm for this purpose. The first is a binary learning algorithm, which considers learning as a classification problem, that is finding a set of weights w that can best separate correct from incorrect structures. The algorithm decomposes each predicted formula and its corresponding input sentence into a feature vector Φ(x, y, z) normalized by the size of the input sentence |x|, and assigns a binary label to this vector2. The learning process is defined over both positive and negative training examples. To accommodate that we modify line 8 in Alg. 1, and use the confidence score to select the top ranking examples as positive examples, and the bottom ranking examples as negative examples. We use a linear kernel SVM with squared-hinge loss as the underlying learning algorithm. The second is a structured learning algorithm which considers learning as a ranking problem, i.e., finding a set of weights w such that the “gold structure” will be ranked on top, preferably by a large margin to allow generalization.The structured learning algorithm can directly use the top ranking predictions of the model (line 8 in Alg. 1) as training data. In this case the underlying algorithm is a structural SVM with squared-hinge loss, using hamming distance as the distance function. We use the cuttingplane method to efficiently optimize the learning process’ objective function. 4 Model Semantic parsing as formulated in Eq. 1 is an inference procedure selecting the top ranked output logical formula. We follow the inference approach in (Roth and Yih, 2007; Clarke et al., 2010) and formalize this process as an Integer Linear Program (ILP). Due to space consideration we provide a brief description, and refer the reader to that paper for more details. 2Without normalization longer sentences would have more influence on binary learning problem. Normalization is therefore required to ensure that each sentence contributes equally to the binary learning problem regardless of its length. 1490 4.1 Inference The inference decision (Eq. 1) is decomposed into smaller decisions, capturing mapping of input tokens to logical fragments (first order) and their composition into larger fragments (second order). 
We encode a first-order decision as αcs, a binary variable indicating that constituent c is aligned with the logical symbol s. A second-order decision βcs,dt, is encoded as a binary variable indicating that the symbol t (associated with constituent d) is an argument of a function s (associated with constituent c). We frame the inference problem over these decisions: Fw(x) = arg max α,β X c∈x X s∈D αcs · wT Φ1(x, c, s) + X c,d∈x X s,t∈D βcs,dt · wT Φ2(x, c, s, d, t) (2) We restrict the possible assignments to the decision variables, forcing the resulting output formula to be syntactically legal, for example by restricting active β-variables to be type consistent, and force the resulting functional composition to be acyclic. We take advantage of the flexible ILP framework, and encode these restrictions as global constraints over Eq. 2. We refer the reader to (Clarke et al., 2010) for a full description of the constraints used. 4.2 Features The inference problem defined in Eq. (2) uses two feature functions: Φ1 and Φ2. First-order decision features Φ1 Determining if a logical symbol is aligned with a specific constituent depends mostly on lexical information. Following previous work (e.g., (Zettlemoyer and Collins, 2005)) we create a small lexicon, mapping logical symbols to surface forms.3 Existing approaches rely on annotated data to extend the lexicon. Instead we rely on external knowledge (Miller et al., 1990) and add features which measure the lexical similarity between a constituent and a logical symbol’s surface forms (as defined by the lexicon). 3The lexicon contains on average 1.42 words per function and 1.07 words per constant. Model Description INITIAL MODEL Manually set weights (Sec. 5.1) PRED. SCORE normalized prediction (Sec. 5.1) ALL EXAMPLES All top structures (Sec. 5.1) UNIGRAM Unigram score (Sec. 3.2) BIGRAM Bigram score (Sec. 3.2) PROPORTION Words-predicate prop (Sec. 3.2) COMBINED Combined estimators (Sec. 3.2) RESPONSE BASED Supervised (binary) (Sec. 5.1) SUPERVISED Fully Supervised (Sec. 5.1) Table 1: Compared systems and naming conventions. Second-order decision features Φ2 Second order decisions rely on syntactic information. We use the dependency tree of the input sentence. Given a second-order decision βcs,dt, the dependency feature takes the normalized distance between the head words in the constituents c and d. In addition, a set of features indicate which logical symbols are usually composed together, without considering their alignment to the text. 5 Experiments In this section we describe our experimental evaluation. We compare several confidence measures and analyze their properties. Tab. 1 defines the naming conventions used throughout this section to refer to the different models we evaluated. We begin by describing our experimental setup and then proceed to describe the experiments and their results. For the sake of clarity we focus on the best performing models (COMBINED using BIGRAM and PROPORTION) first and discuss other models later in the section. 5.1 Experimental Settings In all our experiments we used the Geoquery dataset (Zelle and Mooney, 1996), consisting of U.S. geography NL questions and their corresponding Prolog logical MR. We used the data split described in (Clarke et al., 2010), consisting of 250 queries for evaluation purposes. We compared our system to several supervised models, which were trained using a disjoint set of queries. 
Our learning system had access only to the NL questions, and the logical forms were only used to evaluate the system’s performance. We report the proportion of correct structures (accuracy). Note that this evaluation cor1491 responds to the 0/1 loss over the predicted structures. Initialization Our learning framework requires an initial weight vector as input. We use a straight forward heuristic and provide uniform positive weights to three features. This approach is similar in spirit to previous works (Clarke et al., 2010; Zettlemoyer and Collins, 2007). We refer to this system as INITIAL MODEL throughout this section. Competing Systems We compared our system to several other systems: (1) PRED. SCORE: An unsupervised framework using the model’s internal prediction score (wT Φ(x, y, z)) for confidence estimation. (2) ALL EXAMPLES: Treating all predicted structures as correct, i.e., at each iteration the model is trained over all the predictions it made. The reported score was obtained by selecting the model at the training iteration with the highest overall confidence score (see line 12 in Alg. 1). (3) RESPONSE BASED: A natural upper bound to our framework is the approach used in (Clarke et al., 2010). While our approach is based on assessing the correctness os the model’s predictions according to unsupervised confidence estimation, their framework is provided with external supervision for these decisions, indicating if the predicted structures are correct. (4) SUPERVISED: A fully supervised framework trained over 250 (x, z) pairs using structured SVM. 5.2 Results Our experiments aim to clarify three key points: (1) Can a semantic parser indeed be trained without any form of external supervision? this is our key question, as this is the first attempt to approach this task with an unsupervised learning protocol.4 In order to answer it, we report the overall performance of our system in Tab. 2. The manually constructed model INITIALMODEL achieves a performance of 0.22. We can expect learning to improve on this baseline. We compare three self-trained systems, ALL EXAMPLES, PREDICTIONSCORE and COMBINED, which differ 4While unsupervised learning for various semantic tasks has been widely discussed, this is the first attempt to tackle this task. We refer the reader to Sec. 6 for further discussion of this point. in their sample selection strategy, but all use confidence estimation for selecting the final semantic parsing model. The ALL EXAMPLES approach achieves an accuracy score of 0.656. PREDICTIONSCORE only achieves a performance of 0.164 using the binary learning algorithm and 0.348 using the structured learning algorithm. Finally, our confidence-driven technique COMBINED achieved a score of 0.536 for the binary case and 0.664 for the structured case, the best performing models in both cases. As expected, the supervised systems RESPONSE BASED and SUPERVISED achieve the best performance. These results show that training the model with training examples selected carefully will improve learning - as the best performance is achieved with perfect knowledge of the predictions correctness (RESPONSE BASED). Interestingly the difference between the structured version of our system and that of RESPONSE BASED is only 0.07, suggesting that we can recover the binary feedback signal with high precision. 
The low performance of the PREDICTIONSCORE model is also not surprising, and it demonstrates one of the key principles in confidence estimation - the score should be comparable across predictions done over different inputs, and not the same input, as done in PREDICTIONSCORE model. (2) How does confidence driven sample selection contribute to the learning process? Comparing the systems driven by confidence sample-selection to the ALL EXAMPLES approach uncovers an interesting tradeoff between training with more (noisy) data and selectively training the system with higher quality examples. We argue that carefully selecting high quality training examples will result in better performance. The empirical results indeed support our argument, as the best performing model (RESPONSE BASED) is achieved by sample selection with perfect knowledge of prediction correctness. The confidence-based sample selection system (COMBINED) is the best performing system out of all the self-trained systems. Nonetheless, the ALL EXAMPLES strategy performs well when compared to COMBINED, justifying a closer look at that aspect of our system. We argue that different confidence measures capture different properties of the data, and hypothe1492 size that combining their scores will improve the resulting model. In Tab. 3 we compare the results of the COMBINED measure to the results of its individual components - PROPORTION and BIGRAM. We compare these results both when using the binary and structured learning algorithms. Results show that using the COMBINED measure leads to an improved performance, better than any of the individual measures, suggesting that it can effectively exploit the properties of each confidence measure. Furthermore, COMBINED is the only sample selection strategy that outperforms ALL EXAMPLES. (3) Can confidence measures serve as a good proxy for the model’s performance? In the unsupervised settings we study the learning process may not converge to an optimal model. We argue that by selecting the model that maximizes the averaged confidence score, a better model can be found. We validate this claim empirically in Tab. 4. We compare the performance of the model selected using the confidence score to the performance of the final model considered by the learning algorithm (see Sec. 3.1 for details). We also compare it to the best model achieved in any of the learning iterations. Since these experiments required running the learning algorithm many times, we focused on the binary learning algorithm as it converges considerably faster. In order to focus the evaluation on the effects of learning, we ignore the initial model generated manually (INITIAL MODEL) in these experiments. In order to compare models performance across the different iterations fairly, a uniform scale, such as UNIGRAM and BIGRAM, is required. In the case of the COMBINED measure we used the BIGRAM measure for performance estimation, since it is one of its underlying components. In the PRED. SCORE and PROPORTION models we used both their confidence prediction, and the simple UNIGRAM confidence score to evaluate model performance (the latter appear in parentheses in Tab. 4). Results show that the over overall confidence score serves as a reliable proxy for the model performance - using UNIGRAM and BIGRAM the framework can select the best performing model, far better than the performance of the default model to which the system converged. Algorithm Supervision Acc. INITIAL MODEL — 0.222 SELF-TRAIN: (Structured) PRED. 
SCORE — 0.348 ALL EXAMPLES — 0.656 COMBINED — 0.664 SELF-TRAIN: (Binary) PRED. SCORE — 0.164 COMBINED — 0.536 RESPONSE BASED BINARY 250 (binary) 0.692 STRUCTURED 250 (binary) 0.732 SUPERVISED STRUCTURED 250 (struct.) 0.804 Table 2: Comparing our Self-trained systems with Response-based and supervised models. Results show that our COMBINED approach outperforms all other unsupervised models. Algorithm Accuracy SELF-TRAIN: (Structured) PROPORTION 0.6 BIGRAM 0.644 COMBINED 0.664 SELF-TRAIN: (Binary) BIGRAM 0.532 PROPORTION 0.504 COMBINED 0.536 Table 3: Comparing COMBINED to its components BIGRAM and PROPORTION. COMBINED results in a better score than any of its components, suggesting that it can exploit the properties of each measure effectively. Algorithm Best Conf. estim. Default PRED. SCORE 0.164 0.128 (0.164) 0.134 UNIGRAM 0.52 0.52 0.4 BIGRAM 0.532 0.532 0.472 PROPORTION 0.504 0.27 (0.504) 0.44 COMBINED 0.536 0.536 0.328 Table 4: Using confidence to approximate model performance. We compare the best result obtained in any of the learning algorithm iterations (Best), the result obtained by approximating the best result using the averaged prediction confidence (Conf. estim.) and the result of using the default convergence criterion (Default). Results in parentheses are the result of using the UNIGRAM confidence to approximate the model’s performance. 1493 6 Related Work Semantic parsing has attracted considerable interest in recent years. Current approaches employ various machine learning techniques for this task, such as Inductive Logic Programming in earlier systems (Zelle and Mooney, 1996; Tang and Mooney, 2000) and statistical learning methods in modern ones (Ge and Mooney, 2005; Nguyen et al., 2006; Wong and Mooney, 2006; Kate and Mooney, 2006; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Zettlemoyer and Collins, 2009). The difficulty of providing the required supervision motivated learning approaches using weaker forms of supervision. (Chen and Mooney, 2008; Liang et al., 2009; Branavan et al., 2009; Titov and Kozhevnikov, 2010) ground NL in an external world state directly referenced by the text. The NL input in our setting is not restricted to such grounded settings and therefore we cannot exploit this form of supervision. Recent work (Clarke et al., 2010; Liang et al., 2011) suggest using response-based learning protocols, which alleviate some of the supervision effort. This work takes an additional step in this direction and suggest an unsupervised protocol. Other approaches to unsupervised semantic analysis (Poon and Domingos, 2009; Titov and Klementiev, 2011) take a different approach to semantic representation, by clustering semantically equivalent dependency tree fragments, and identifying their predicate-argument structure. While these approaches have been applied successfully to semantic tasks such as question answering, they do not ground the input in a well defined output language, an essential component in our task. Our unsupervised approach follows a self training protocol (Yarowsky, 1995; McClosky et al., 2006; Reichart and Rappoport, 2007b) enhanced with constraints restricting the output space (Chang et al., 2007; Chang et al., 2009). A Self training protocol uses its own predictions for training. We estimate the quality of the predictions and use only high confidence examples for training. This selection criterion provides an additional view, different than the one used by the prediction model. 
Multi-view learning is a well established idea, implemented in methods such as co-training (Blum and Mitchell, 1998). Quality assessment of a learned model output was explored by many previous works (see (Caruana and Niculescu-Mizil, 2006) for a survey), and applied to several NL processing tasks such as syntactic parsing (Reichart and Rappoport, 2007a; Yates et al., 2006), machine translation (Ueffing and Ney, 2007), speech (Koo et al., 2001), relation extraction (Rosenfeld and Feldman, 2007), IE (Culotta and McCallum, 2004), QA (Chu-Carroll et al., 2003) and dialog systems (Lin and Weng, 2008). In addition to sample selection we use confidence estimation as a way to approximate the overall quality of the model and use it for model selection. This use of confidence estimation was explored in (Reichart et al., 2010), to select between models trained with different random starting points. In this work we integrate this estimation deeper into the learning process, thus allowing our training procedure to return the best performing model. 7 Conclusions We introduced an unsupervised learning algorithm for semantic parsing, the first for this task to the best of our knowledge. To compensate for the lack of training data we use a self-training protocol, driven by unsupervised confidence estimation. We demonstrate empirically that our approach results in a high preforming semantic parser and show that confidence estimation plays a vital role in this success, both by identifying good training examples as well as identifying good over all performance, used to improve the final model selection. In future work we hope to further improve unsupervised semantic parsing performance. Particularly, we intend to explore new approaches for confidence estimation and their usage in the unsupervised and semi-supervised versions of the task. Acknowledgments We thank the anonymous reviewers for their helpful feedback. This material is based upon work supported by DARPA under the Bootstrap Learning Program and Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the DARPA, AFRL, or the US government. 1494 References A. Blum and T. Mitchell. 1998. Combining labeled and unlabeled data with co-training. In COLT. S.R.K. Branavan, H. Chen, L. Zettlemoyer, and R. Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In ACL. R. Caruana and A. Niculescu-Mizil. 2006. An empirical comparison of supervised l earning algorithms. In ICML. M. Chang, L. Ratinov, and D. Roth. 2007. Guiding semisupervision with constraint-driven learning. In Proc. of the Annual Meeting of the ACL. M. Chang, D. Goldwasser, D. Roth, and Y. Tu. 2009. Unsupervised constraint driven learning for transliteration discovery. In NAACL. D. Chen and R. Mooney. 2008. Learning to sportscast: a test of grounded language acquisition. In ICML. J. Chu-Carroll, J. Prager K. Czuba, and A. Ittycheriah. 2003. In question answering, two heads are better than on. In HLT-NAACL. J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world’s response. In CoNLL, 7. M. Collins and Y. Singer. 1999. Unsupervised models for named entity classification. In EMNLP–VLC. A. Culotta and A. McCallum. 2004. Confidence estimation for information extraction. In HLT-NAACL. R. Ge and R. Mooney. 2005. 
A statistical semantic parser that integrates syntax and semantics. In CoNLL. R. Kate and R. Mooney. 2006. Using string-kernels for learning semantic parsers. In ACL. Y. Koo, C. Lee, and B. Juang. 2001. Speech recognition and utterance verification based on a generalized confidence score. IEEE Transactions on Speech and Audio Processing, 9(8):821–832. P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In ACL. P. Liang, M.I. Jordan, and D. Klein. 2011. Deep compositional semantics from shallow supervision. In ACL. F. Lin and F. Weng. 2008. Computing confidence scores for all sub parse trees. In ACL. D. McClosky, E. Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In HLT-NAACL. G. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K.J. Miller. 1990. Wordnet: An on-line lexical database. International Journal of Lexicography. L. Nguyen, A. Shimazu, and X. Phan. 2006. Semantic parsing with structured svm ensemble classification models. In ACL. H. Poon and P. Domingos. 2009. Unsupervised semantic parsing. In EMNLP. R. Reichart and A. Rappoport. 2007a. An ensemble method for selection of high quality parses. In ACL. R. Reichart and A. Rappoport. 2007b. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In ACL. R. Reichart, R. Fattal, and A. Rappoport. 2010. Improved unsupervised pos induction using intrinsic clustering quality and a zipfian constraint. In CoNLL. B. Rosenfeld and R. Feldman. 2007. Using corpus statistics on entities to improve semi–supervised relation extraction from the web. In ACL. D. Roth and W. Yih. 2007. Global inference for entity and relation identification via a linear programming formulation. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. L. Tang and R. Mooney. 2000. Automated construction of database interfaces: integrating statistical and relational learning for semantic parsing. In EMNLP. L. R. Tang and R. J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In ECML. I. Titov and A. Klementiev. 2011. A bayesian model for unsupervised semantic parsing. In ACL. I. Titov and M. Kozhevnikov. 2010. Bootstrapping semantic analyzers from non-contradictory texts. In ACL. N. Ueffing and H. Ney. 2007. Word-level confidence estimation for machine translation. Computational Linguistics, 33(1):9–40. Y.W. Wong and R. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In NAACL. Y.W. Wong and R. Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In ACL. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised method. In ACL. A. Yates, S. Schoenmackers, and O. Etzioni. 2006. Detecting parser errors using web-based semantic filters. In EMNLP. J. M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic proramming. In AAAI. L. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI. L. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In CoNLL. L. Zettlemoyer and M. Collins. 2009. Learning contextdependent mappings from sentences to logical form. In ACL. 1495
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 142–150, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Learning Word Vectors for Sentiment Analysis Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts Stanford University Stanford, CA 94305 [amaas, rdaly, ptpham, yuze, ang, cgpotts]@stanford.edu Abstract Unsupervised vector-based approaches to semantics can model rich lexical meanings, but they largely fail to capture sentiment information that is central to many word meanings and important for a wide range of NLP tasks. We present a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term–document information as well as rich sentiment content. The proposed model can leverage both continuous and multi-dimensional sentiment information as well as non-sentiment annotations. We instantiate the model to utilize the document-levelsentiment polarity annotations present in many online documents (e.g. star ratings). We evaluate the model using small, widely used sentiment and subjectivity corpora and find it out-performs several previously introduced methods for sentiment classification. We also introduce a large dataset of movie reviews to serve as a more robust benchmark for work in this area. 1 Introduction Word representations are a critical component of many natural language processing systems. It is common to represent words as indices in a vocabulary, but this fails to capture the rich relational structure of the lexicon. Vector-based models do much better in this regard. They encode continuous similarities between words as distance or angle between word vectors in a high-dimensional space. The general approach has proven useful in tasks such as word sense disambiguation, named entity recognition, part of speech tagging, and document retrieval (Turney and Pantel, 2010; Collobert and Weston, 2008; Turian et al., 2010). In this paper, we present a model to capture both semantic and sentiment similarities among words. The semantic component of our model learns word vectors via an unsupervised probabilistic model of documents. However, in keeping with linguistic and cognitive research arguing that expressive content and descriptive semantic content are distinct (Kaplan, 1999; Jay, 2000; Potts, 2007), we find that this basic model misses crucial sentiment information. For example, while it learns that wonderful and amazing are semantically close, it doesn’t capture the fact that these are both very strong positive sentiment words, at the opposite end of the spectrum from terrible and awful. Thus, we extend the model with a supervised sentiment component that is capable of embracing many social and attitudinal aspects of meaning (Wilson et al., 2004; Alm et al., 2005; Andreevskaia and Bergler, 2006; Pang and Lee, 2005; Goldberg and Zhu, 2006; Snyder and Barzilay, 2007). This component of the model uses the vector representation of words to predict the sentiment annotations on contexts in which the words appear. This causes words expressing similar sentiment to have similar vector representations. The full objective function of the model thus learns semantic vectors that are imbued with nuanced sentiment information. In our experiments, we show how the model can leverage document-level sentiment annotations of a sort that are abundant online in the form of consumer reviews for movies, products, etc. 
The technique is suffi142 ciently general to work also with continuous and multi-dimensional notions of sentiment as well as non-sentiment annotations (e.g., political affiliation, speaker commitment). After presenting the model in detail, we provide illustrative examples of the vectors it learns, and then we systematically evaluate the approach on document-level and sentence-level classification tasks. Our experiments involve the small, widely used sentiment and subjectivity corpora of Pang and Lee (2004), which permits us to make comparisons with a number of related approaches and published results. We also show that this dataset contains many correlations between examples in the training and testing sets. This leads us to evaluate on, and make publicly available, a large dataset of informal movie reviews from the Internet Movie Database (IMDB). 2 Related work The model we present in the next section draws inspiration from prior work on both probabilistic topic modeling and vector-spaced models for word meanings. Latent Dirichlet Allocation (LDA; (Blei et al., 2003)) is a probabilistic document model that assumes each document is a mixture of latent topics. For each latent topic T, the model learns a conditional distribution p(w|T) for the probability that word w occurs in T. One can obtain a kdimensional vector representation of words by first training a k-topic model and then filling the matrix with the p(w|T) values (normalized to unit length). The result is a word–topic matrix in which the rows are taken to represent word meanings. However, because the emphasis in LDA is on modeling topics, not word meanings, there is no guarantee that the row (word) vectors are sensible as points in a k-dimensional space. Indeed, we show in section 4 that using LDA in this way does not deliver robust word vectors. The semantic component of our model shares its probabilistic foundation with LDA, but is factored in a manner designed to discover word vectors rather than latent topics. Some recent work introduces extensions of LDA to capture sentiment in addition to topical information (Li et al., 2010; Lin and He, 2009; Boyd-Graber and Resnik, 2010). Like LDA, these methods focus on modeling sentiment-imbued topics rather than embedding words in a vector space. Vector space models (VSMs) seek to model words directly (Turney and Pantel, 2010). Latent Semantic Analysis (LSA), perhaps the best known VSM, explicitly learns semantic word vectors by applying singular value decomposition (SVD) to factor a term–document co-occurrence matrix. It is typical to weight and normalize the matrix values prior to SVD. To obtain a k-dimensional representation for a given word, only the entries corresponding to the k largest singular values are taken from the word’s basis in the factored matrix. Such matrix factorizationbased approaches are extremely successful in practice, but they force the researcher to make a number of design choices (weighting, normalization, dimensionality reduction algorithm) with little theoretical guidance to suggest which to prefer. Using term frequency (tf) and inverse document frequency (idf) weighting to transform the values in a VSM often increases the performance of retrieval and categorization systems. Delta idf weighting (Martineau and Finin, 2009) is a supervised variant of idf weighting in which the idf calculation is done for each document class and then one value is subtracted from the other. 
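For illustration, a smoothed variant of this weighting can be computed roughly as follows (a sketch only; the exact smoothing and logarithm base in Martineau and Finin (2009) may differ):

```python
import numpy as np

def delta_idf(df_pos, df_neg, n_pos, n_neg):
    # df_pos, df_neg: per-term document frequencies within the positive and
    # negative classes; n_pos, n_neg: number of documents in each class.
    idf_pos = np.log((n_pos + 1.0) / (df_pos + 1.0))   # add-one smoothing
    idf_neg = np.log((n_neg + 1.0) / (df_neg + 1.0))
    return idf_pos - idf_neg   # one per-class idf subtracted from the other
```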
Martineau and Finin present evidence that this weighting helps with sentiment classification, and Paltoglou and Thelwall (2010) systematically explore a number of weighting schemes in the context of sentiment analysis. The success of delta idf weighting in previous work suggests that incorporating sentiment information into VSM values via supervised methods is helpful for sentiment analysis. We adopt this insight, but we are able to incorporate it directly into our model’s objective function. (Section 4 compares our approach with a representative sample of such weighting schemes.) 3 Our Model To capture semantic similarities among words, we derive a probabilistic model of documents which learns word representations. This component does not require labeled data, and shares its foundation with probabilistic topic models such as LDA. The sentiment component of our model uses sentiment annotations to constrain words expressing similar 143 sentiment to have similar representations. We can efficiently learn parameters for the joint objective function using alternating maximization. 3.1 Capturing Semantic Similarities We build a probabilistic model of a document using a continuous mixture distribution over words indexed by a multi-dimensional random variable θ. We assume words in a document are conditionally independent given the mixture variable θ. We assign a probability to a document d using a joint distribution over the document and θ. The model assumes each word wi ∈d is conditionally independent of the other words given θ. The probability of a document is thus p(d) = Z p(d, θ)dθ = Z p(θ) N Y i=1 p(wi|θ)dθ. (1) Where N is the number of words in d and wi is the ith word in d. We use a Gaussian prior on θ. We define the conditional distribution p(wi|θ) using a log-linear model with parameters R and b. The energy function uses a word representation matrix R ∈R(β x |V |) where each word w (represented as a one-on vector) in the vocabulary V has a βdimensional vector representation φw = Rw corresponding to that word’s column in R. The random variable θ is also a β-dimensional vector, θ ∈Rβ which weights each of the β dimensions of words’ representation vectors. We additionally introduce a bias bw for each word to capture differences in overall word frequencies. The energy assigned to a word w given these model parameters is E(w; θ, φw, bw) = −θTφw −bw. (2) To obtain the distribution p(w|θ) we use a softmax, p(w|θ; R, b) = exp(−E(w; θ, φw, bw)) P w′∈V exp(−E(w′; θ, φw′, bw′)) (3) = exp(θT φw + bw) P w′∈V exp(θTφw′ + bw′). (4) The number of terms in the denominator’s summation grows linearly in |V |, making exact computation of the distribution possible. For a given θ, a word w’s occurrence probability is related to how closely its representation vector φw matches the scaling direction of θ. This idea is similar to the word vector inner product used in the log-bilinear language model of Mnih and Hinton (2007). Equation 1 resembles the probabilistic model of LDA (Blei et al., 2003), which models documents as mixtures of latent topics. One could view the entries of a word vector φ as that word’s association strength with respect to each latent topic dimension. The random variable θ then defines a weighting over topics. However, our model does not attempt to model individual topics, but instead directly models word probabilities conditioned on the topic mixture variable θ. 
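Concretely, the conditional distribution in Eqs. 3–4 is a softmax over the vocabulary. The sketch below assumes R is stored as a β × |V| array whose columns are the word vectors φw, and b as a length-|V| bias array:

```python
import numpy as np

def word_distribution(theta, R, b):
    # p(w | theta; R, b) = softmax(theta^T phi_w + b_w) over the vocabulary.
    logits = theta @ R + b      # negative energy -E(w; theta, phi_w, b_w) for every w
    logits -= logits.max()      # subtract the max for numerical stability
    p = np.exp(logits)
    return p / p.sum()
```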
Because of the log-linear formulation of the conditional distribution, θ is a vector in Rβ and not restricted to the unit simplex as it is in LDA. We now derive maximum likelihood learning for this model when given a set of unlabeled documents D. In maximum likelihood learning we maximize the probability of the observed data given the model parameters. We assume documents dk ∈D are i.i.d. samples. Thus the learning problem becomes max R,b p(D; R, b) = Y dk∈D Z p(θ) Nk Y i=1 p(wi|θ; R, b)dθ. (5) Using maximum a posteriori (MAP) estimates for θ, we approximate this learning problem as max R,b Y dk∈D p(ˆθk) Nk Y i=1 p(wi|ˆθk; R, b), (6) where ˆθk denotes the MAP estimate of θ for dk. We introduce a Frobenious norm regularization term for the word representation matrix R. The word biases b are not regularized reflecting the fact that we want the biases to capture whatever overall word frequency statistics are present in the data. By taking the logarithm and simplifying we obtain the final objective, ν||R||2 F + X dk∈D λ||ˆθk||2 2 + Nk X i=1 log p(wi|ˆθk; R, b), (7) which is maximized with respect to R and b. The hyper-parameters in the model are the regularization 144 weights (λ and ν), and the word vector dimensionality β. 3.2 Capturing Word Sentiment The model presented so far does not explicitly capture sentiment information. Applying this algorithm to documents will produce representations where words that occur together in documents have similar representations. However, this unsupervised approach has no explicit way of capturing which words are predictive of sentiment as opposed to content-related. Much previous work in natural language processing achieves better representations by learning from multiple tasks (Collobert and Weston, 2008; Finkel and Manning, 2009). Following this theme we introduce a second task to utilize labeled documents to improve our model’s word representations. Sentiment is a complex, multi-dimensional concept. Depending on which aspects of sentiment we wish to capture, we can give some body of text a sentiment label s which can be categorical, continuous, or multi-dimensional. To leverage such labels, we introduce an objective that the word vectors of our model should predict the sentiment label using some appropriate predictor, ˆs = f(φw). (8) Using an appropriate predictor function f(x) we map a word vector φw to a predicted sentiment label ˆs. We can then improve our word vector φw to better predict the sentiment labels of contexts in which that word occurs. For simplicity we consider the case where the sentiment label s is a scalar continuous value representing sentiment polarity of a document. This captures the case of many online reviews where documents are associated with a label on a star rating scale. We linearly map such star values to the interval s ∈[0, 1] and treat them as a probability of positive sentiment polarity. Using this formulation, we employ a logistic regression as our predictor f(x). We use w’s vector representation φw and regression weights ψ to express this as p(s = 1|w; R, ψ) = σ(ψT φw + bc), (9) where σ(x) is the logistic function and ψ ∈Rβ is the logistic regression weight vector. We additionally introduce a scalar bias bc for the classifier. The logistic regression weights ψ and bc define a linear hyperplane in the word vector space where a word vector’s positive sentiment probability depends on where it lies with respect to this hyperplane. 
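The sentiment predictor of Eq. 9 is then a one-line logistic model on top of the word vectors; in the sketch below `word_idx` indexes a column of R:

```python
import numpy as np

def sentiment_prob(R, psi, b_c, word_idx):
    # p(s = 1 | w; R, psi) = sigma(psi^T phi_w + b_c), Eq. 9.
    z = psi @ R[:, word_idx] + b_c
    return 1.0 / (1.0 + np.exp(-z))
```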
Learning over a collection of documents results in words residing different distances from this hyperplane based on the average polarity of documents in which the words occur. Given a set of labeled documents D where sk is the sentiment label for document dk, we wish to maximize the probability of document labels given the documents. We assume documents in the collection and words within a document are i.i.d. samples. By maximizing the log-objective we obtain, max R,ψ,bc |D| X k=1 Nk X i=1 log p(sk|wi; R, ψ, bc). (10) The conditional probability p(sk|wi; R, ψ, bc) is easily obtained from equation 9. 3.3 Learning The full learning objective maximizes a sum of the two objectives presented. This produces a final objective function of, ν||R||2 F + |D| X k=1 λ||ˆθk||2 2 + Nk X i=1 log p(wi|ˆθk; R, b) + |D| X k=1 1 |Sk| Nk X i=1 log p(sk|wi; R, ψ, bc). (11) |Sk| denotes the number of documents in the dataset with the same rounded value of sk (i.e. sk < 0.5 and sk ≥0.5). We introduce the weighting 1 |Sk| to combat the well-known imbalance in ratings present in review collections. This weighting prevents the overall distribution of document ratings from affecting the estimate of document ratings in which a particular word occurs. The hyper-parameters of the model are the regularization weights (λ and ν), and the word vector dimensionality β. Maximizing the objective function with respect to R, b, ψ, and bc is a non-convex problem. We use alternating maximization, which first optimizes the 145 word representations (R, b, ψ, and bc) while leaving the MAP estimates (ˆθ) fixed. Then we find the new MAP estimate for each document while leaving the word representations fixed, and continue this process until convergence. The optimization algorithm quickly finds a global solution for each ˆθk because we have a low-dimensional, convex problem in each ˆθk. Because the MAP estimation problems for different documents are independent, we can solve them on separate machines in parallel. This facilitates scaling the model to document collections with hundreds of thousands of documents. 4 Experiments We evaluate our model with document-level and sentence-level categorization tasks in the domain of online movie reviews. For document categorization, we compare our method to previously published results on a standard dataset, and introduce a new dataset for the task. In both tasks we compare our model’s word representations with several bag of words weighting methods, and alternative approaches to word vector induction. 4.1 Word Representation Learning We induce word representations with our model using 25,000 movie reviews from IMDB. Because some movies receive substantially more reviews than others, we limited ourselves to including at most 30 reviews from any movie in the collection. We build a fixed dictionary of the 5,000 most frequent tokens, but ignore the 50 most frequent terms from the original full vocabulary. Traditional stop word removal was not used because certain stop words (e.g. negating words) are indicative of sentiment. Stemming was not applied because the model learns similar representations for words of the same stem when the data suggests it. Additionally, because certain non-word tokens (e.g. “!” and “:-)” ) are indicative of sentiment, we allow them in our vocabulary. Ratings on IMDB are given as star values (∈{1, 2, ..., 10}), which we linearly map to [0, 1] to use as document labels when training our model. The semantic component of our model does not require document labels. 
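As a schematic summary of the alternating procedure in Section 3.3, the training loop can be organized as below; `fit_word_params`, `fit_theta`, and `objective` are placeholders for the two inner optimizers and the joint objective of Eq. 11, and the convergence test is illustrative:

```python
def alternating_maximization(docs, labels, params, fit_word_params, fit_theta,
                             objective, max_iters=50, tol=1e-4):
    # Alternate between (a) updating R, b, psi, b_c with the per-document MAP
    # estimates theta_hat held fixed, and (b) re-estimating each theta_hat with
    # the word parameters held fixed; step (b) is independent per document and
    # can therefore be run in parallel across machines.
    thetas = [fit_theta(d, params) for d in docs]
    prev = float("-inf")
    for _ in range(max_iters):
        params = fit_word_params(docs, labels, thetas, params)   # step (a)
        thetas = [fit_theta(d, params) for d in docs]            # step (b)
        obj = objective(docs, labels, thetas, params)
        if obj - prev < tol:
            break
        prev = obj
    return params, thetas
```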
We train a variant of our model which uses 50,000 unlabeled reviews in addition to the labeled set of 25,000 reviews. The unlabeled set of reviews contains neutral reviews as well as those which are polarized as found in the labeled set. Training the model with additional unlabeled data captures a common scenario where the amount of labeled data is small relative to the amount of unlabeled data available. For all word vector models, we use 50-dimensional vectors. As a qualitative assessment of word representations, we visualize the words most similar to a query word using vector similarity of the learned representations. Given a query word w and another word w′ we obtain their vector representations φw and φw′, and evaluate their cosine similarity as S(φw, φw′) = φT wφw′ ||φw||·||φw′||. By assessing the similarity of w with all other words w′, we can find the words deemed most similar by the model. Table 1 shows the most similar words to given query words using our model’s word representations as well as those of LSA. All of these vectors capture broad semantic similarities. However, both versions of our model seem to do better than LSA in avoiding accidental distributional similarities (e.g., screwball and grant as similar to romantic) A comparison of the two versions of our model also begins to highlight the importance of adding sentiment information. In general, words indicative of sentiment tend to have high similarity with words of the same sentiment polarity, so even the purely unsupervised model’s results look promising. However, they also show more genre and content effects. For example, the sentiment enriched vectors for ghastly are truly semantic alternatives to that word, whereas the vectors without sentiment also contain some content words that tend to have ghastly predicated of them. Of course, this is only an impressionistic analysis of a few cases, but it is helpful in understanding why the sentiment-enriched model proves superior at the sentiment classification results we report next. 4.2 Other Word Representations For comparison, we implemented several alternative vector space models that are conceptually similar to our own, as discussed in section 2: Latent Semantic Analysis (LSA; Deerwester et al., 1990) We apply truncated SVD to a tf.idf weighted, cosine normalized count matrix, which is a standard weighting and smoothing scheme for 146 Our model Our model Sentiment + Semantic Semantic only LSA melancholy bittersweet thoughtful poetic heartbreaking warmth lyrical happiness layer poetry tenderness gentle profound compassionate loneliness vivid ghastly embarrassingly predators hideous trite hideous inept laughably tube severely atrocious baffled grotesque appalling smack unsuspecting lackluster lame passable uninspired laughable unconvincing flat unimaginative amateurish bland uninspired clich´ed forgettable awful insipid mediocre romantic romance romance romance love charming screwball sweet delightful grant beautiful sweet comedies relationship chemistry comedy Table 1: Similarity of learned word vectors. Each target word is given with its five most similar words using cosine similarity of the vectors determined by each model. The full version of our model (left) captures both lexical similarity as well as similarity of sentiment strength and orientation. Our unsupervised semantic component (center) and LSA (right) capture semantic relations. VSM induction (Turney and Pantel, 2010). 
Latent Dirichlet Allocation (LDA; Blei et al., 2003) We use the method described in section 2 for inducing word representations from the topic matrix. To train the 50-topic LDA model we use code released by Blei et al. (2003). We use the same 5,000 term vocabulary for LDA as is used for training word vector models. We leave the LDA hyperparameters at their default values, though some work suggests optimizing over priors for LDA is important (Wallach et al., 2009). Weighting Variants We evaluate both binary (b) term frequency weighting with smoothed delta idf (∆t’) and no idf (n) because these variants worked well in previous experiments in sentiment (Martineau and Finin, 2009; Pang et al., 2002). In all cases, we use cosine normalization (c). Paltoglou and Thelwall (2010) perform an extensive analysis of such weighting variants for sentiment tasks. 4.3 Document Polarity Classification Our first evaluation task is document-level sentiment polarity classification. A classifier must predict whether a given review is positive or negative given the review text. Given a document’s bag of words vector v, we obtain features from our model using a matrixvector product Rv, where v can have arbitrary tf.idf weighting. We do not cosine normalize v, instead applying cosine normalization to the final feature vector Rv. This procedure is also used to obtain features from the LDA and LSA word vectors. In preliminary experiments, we found ‘bnn’ weighting to work best for v when generating document features via the product Rv. In all experiments, we use this weighting to get multi-word representations 147 Features PL04 Our Dataset Subjectivity Bag of Words (bnc) 85.45 87.80 87.77 Bag of Words (b∆t’c) 85.80 88.23 85.65 LDA 66.70 67.42 66.65 LSA 84.55 83.96 82.82 Our Semantic Only 87.10 87.30 86.65 Our Full 84.65 87.44 86.19 Our Full, Additional Unlabeled 87.05 87.99 87.22 Our Semantic + Bag of Words (bnc) 88.30 88.28 88.58 Our Full + Bag of Words (bnc) 87.85 88.33 88.45 Our Full, Add’l Unlabeled + Bag of Words (bnc) 88.90 88.89 88.13 Bag of Words SVM (Pang and Lee, 2004) 87.15 N/A 90.00 Contextual Valence Shifters (Kennedy and Inkpen, 2006) 86.20 N/A N/A tf.∆idf Weighting (Martineau and Finin, 2009) 88.10 N/A N/A Appraisal Taxonomy (Whitelaw et al., 2005) 90.20 N/A N/A Table 2: Classification accuracy on three tasks. From left to right the datasets are: A collection of 2,000 movie reviews often used as a benchmark of sentiment classification (Pang and Lee, 2004), 50,000 reviews we gathered from IMDB, and the sentence subjectivity dataset also released by (Pang and Lee, 2004). All tasks are balanced two-class problems. from word vectors. 4.3.1 Pang and Lee Movie Review Dataset The polarity dataset version 2.0 introduced by Pang and Lee (2004) 1 consists of 2,000 movie reviews, where each is associated with a binary sentiment polarity label. We report 10-fold cross validation results using the authors’ published folds to make our results comparable with others in the literature. We use a linear support vector machine (SVM) classifier trained with LIBLINEAR (Fan et al., 2008), and set the SVM regularization parameter to the same value used by Pang and Lee (2004). Table 2 shows the classification performance of our method, other VSMs we implemented, and previously reported results from the literature. Bag of words vectors are denoted by their weighting notation. Features from word vector learner are denoted by the learner name. 
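In code, the document-level feature construction just described amounts to a matrix product followed by cosine normalization. The sketch below uses scikit-learn's LinearSVC (built on LIBLINEAR) as the classifier, with arrays such as `X_train` and the regularization parameter left as placeholders:

```python
import numpy as np
from sklearn.svm import LinearSVC

def document_features(R, bow):
    # bow: docs x |V| binary (0/1) array, i.e. 'bnn' weighting;
    # R: beta x |V| word representation matrix, so each row of the result is Rv.
    feats = bow @ R.T
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.maximum(norms, 1e-12)   # cosine-normalize the final vectors

# Illustrative usage with placeholder arrays:
# clf = LinearSVC(C=1.0).fit(document_features(R, X_train), y_train)
# accuracy = clf.score(document_features(R, X_test), y_test)
```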
As a control, we trained versions of our model with only the unsupervised semantic component, and the full model (semantic and sentiment). We also include results for a version of our full model trained with 50,000 additional unlabeled examples. Finally, to test whether our models’ representations complement a standard bag of words, we evaluate performance of the two feature representations concatenated. 1http://www.cs.cornell.edu/people/pabo/movie-review-data Our method’s features clearly outperform those of other VSMs, and perform best when combined with the original bag of words representation. The variant of our model trained with additional unlabeled data performed best, suggesting the model can effectively utilize large amounts of unlabeled data along with labeled examples. Our method performs competitively with previously reported results in spite of our restriction to a vocabulary of only 5,000 words. We extracted the movie title associated with each review and found that 1,299 of the 2,000 reviews in the dataset have at least one other review of the same movie in the dataset. Of 406 movies with multiple reviews, 249 have the same polarity label for all of their reviews. Overall, these facts suggest that, relative to the size of the dataset, there are highly correlated examples with correlated labels. This is a natural and expected property of this kind of document collection, but it can have a substantial impact on performance in datasets of this scale. In the random folds distributed by the authors, approximately 50% of reviews in each validation fold’s test set have a review of the same movie with the same label in the training set. Because the dataset is small, a learner may perform well by memorizing the association between label and words unique to a particular movie (e.g., character names or plot terms). We introduce a substantially larger dataset, which 148 uses disjoint sets of movies for training and testing. These steps minimize the ability of a learner to rely on idiosyncratic word–class associations, thereby focusing attention on genuine sentiment features. 4.3.2 IMDB Review Dataset We constructed a collection of 50,000 reviews from IMDB, allowing no more than 30 reviews per movie. The constructed dataset contains an even number of positive and negative reviews, so randomly guessing yields 50% accuracy. Following previous work on polarity classification, we consider only highly polarized reviews. A negative review has a score ≤4 out of 10, and a positive review has a score ≥7 out of 10. Neutral reviews are not included in the dataset. In the interest of providing a benchmark for future work in this area, we release this dataset to the public.2 We evenly divided the dataset into training and test sets. The training set is the same 25,000 labeled reviews used to induce word vectors with our model. We evaluate classifier performance after cross-validating classifier parameters on the training set, again using a linear SVM in all cases. Table 2 shows classification performance on our subset of IMDB reviews. Our model showed superior performance to other approaches, and performed best when concatenated with bag of words representation. Again the variant of our model which utilized extra unlabeled data during training performed best. Differences in accuracy are small, but, because our test set contains 25,000 examples, the variance of the performance estimate is quite low. 
For example, an accuracy increase of 0.1% corresponds to correctly classifying an additional 25 reviews. 4.4 Subjectivity Detection As a second evaluation task, we performed sentencelevel subjectivity classification. In this task, a classifier is trained to decide whether a given sentence is subjective, expressing the writer’s opinions, or objective, expressing purely facts. We used the dataset of Pang and Lee (2004), which contains subjective sentences from movie review summaries and objective sentences from movie plot summaries. This task 2Dataset and further details are available online at: http://www.andrew-maas.net/data/sentiment is substantially different from the review classification task because it uses sentences as opposed to entire documents and the target concept is subjectivity instead of opinion polarity. We randomly split the 10,000 examples into 10 folds and report 10-fold cross validation accuracy using the SVM training protocol of Pang and Lee (2004). Table 2 shows classification accuracies from the sentence subjectivity experiment. Our model again provided superior features when compared against other VSMs. Improvement over the bag-of-words baseline is obtained by concatenating the two feature vectors. 5 Discussion We presented a vector space model that learns word representations captuing semantic and sentiment information. The model’s probabilistic foundation gives a theoretically justified technique for word vector induction as an alternative to the overwhelming number of matrix factorization-based techniques commonly used. Our model is parametrized as a log-bilinear model following recent success in using similar techniques for language models (Bengio et al., 2003; Collobert and Weston, 2008; Mnih and Hinton, 2007), and it is related to probabilistic latent topic models (Blei et al., 2003; Steyvers and Griffiths, 2006). We parametrize the topical component of our model in a manner that aims to capture word representations instead of latent topics. In our experiments, our method performed better than LDA, which models latent topics directly. We extended the unsupervised model to incorporate sentiment information and showed how this extended model can leverage the abundance of sentiment-labeled texts available online to yield word representations that capture both sentiment and semantic relations. We demonstrated the utility of such representations on two tasks of sentiment classification, using existing datasets as well as a larger one that we release for future research. These tasks involve relatively simple sentiment information, but the model is highly flexible in this regard; it can be used to characterize a wide variety of annotations, and thus is broadly applicable in the growing areas of sentiment analysis and retrieval. 149 Acknowledgments This work is supported by the DARPA Deep Learning program under contract number FA8650-10-C7020, an NSF Graduate Fellowship awarded to AM, and ONR grant No. N00014-10-1-0109 to CP. References C. O. Alm, D. Roth, and R. Sproat. 2005. Emotions from text: machine learning for text-based emotion prediction. In Proceedings of HLT/EMNLP, pages 579–586. A. Andreevskaia and S. Bergler. 2006. Mining WordNet for fuzzy sentiment: sentiment tag extraction from WordNet glosses. In Proceedings of the European ACL, pages 209–216. Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin. 2003. a neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, August. D. M. Blei, A. Y. Ng, and M. I. Jordan. 2003. 
Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, May. J. Boyd-Graber and P. Resnik. 2010. Holistic sentiment analysis across languages: multilingual supervised latent Dirichlet allocation. In Proceedings of EMNLP, pages 45–55. R. Collobert and J. Weston. 2008. A unified architecture for natural language processing. In Proceedings of the ICML, pages 160–167. S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41:391–407, September. R. E. Fan, K. W. Chang, C. J. Hsieh, X. R. Wang, and C. J. Lin. 2008. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, August. J. R. Finkel and C. D. Manning. 2009. Joint parsing and named entity recognition. In Proceedings of NAACL, pages 326–334. A. B. Goldberg and J. Zhu. 2006. Seeing stars when there aren’t many stars: graph-based semi-supervised learning for sentiment categorization. In TextGraphs: HLT/NAACL Workshop on Graph-based Algorithms for Natural Language Processing, pages 45–52. T. Jay. 2000. Why We Curse: A Neuro-PsychoSocial Theory of Speech. John Benjamins, Philadelphia/Amsterdam. D. Kaplan. 1999. What is meaning? Explorations in the theory of Meaning as Use. Brief version — draft 1. Ms., UCLA. A. Kennedy and D. Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence, 22:110–125, May. F. Li, M. Huang, and X. Zhu. 2010. Sentiment analysis with global topics and local dependency. In Proceedings of AAAI, pages 1371–1376. C. Lin and Y. He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceeding of the 18th ACM Conference on Information and Knowledge Management, pages 375–384. J. Martineau and T. Finin. 2009. Delta tfidf: an improved feature space for sentiment analysis. In Proceedings of the 3rd AAAI International Conference on Weblogs and Social Media, pages 258–261. A. Mnih and G. E. Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of the ICML, pages 641–648. G. Paltoglou and M. Thelwall. 2010. A study of information retrieval weighting schemes for sentiment analysis. In Proceedings of the ACL, pages 1386–1395. B. Pang and L. Lee. 2004. A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the ACL, pages 271–278. B. Pang and L. Lee. 2005. Seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL, pages 115– 124. B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of EMNLP, pages 79–86. C. Potts. 2007. The expressive dimension. Theoretical Linguistics, 33:165–197. B. Snyder and R. Barzilay. 2007. Multiple aspect ranking using the good grief algorithm. In Proceedings of NAACL, pages 300–307. M. Steyvers and T. L. Griffiths. 2006. Probabilistic topic models. In T. Landauer, D McNamara, S. Dennis, and W. Kintsch, editors, Latent Semantic Analysis: A Road to Meaning. J. Turian, L. Ratinov, and Y. Bengio. 2010. Word representations: A simple and general method for semisupervised learning. In Proceedings of the ACL, page 384394. P. D. Turney and P. Pantel. 2010. From frequency to meaning: vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. H. Wallach, D. Mimno, and A. McCallum. 
2009. Rethinking LDA: why priors matter. In Proceedings of NIPS, pages 1973–1981. C. Whitelaw, N. Garg, and S. Argamon. 2005. Using appraisal groups for sentiment analysis. In Proceedings of CIKM, pages 625–631. T. Wilson, J. Wiebe, and R. Hwa. 2004. Just how mad are you? Finding strong and weak opinion clauses. In Proceedings of AAAI, pages 761–769. 150
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1496–1505, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Aspect Ranking: Identifying Important Product Aspects from Online Consumer Reviews Jianxing Yu, Zheng-Jun Zha, Meng Wang, Tat-Seng Chua School of Computing National University of Singapore {jianxing, zhazj, wangm, chuats}@comp.nus.edu.sg Abstract In this paper, we dedicate to the topic of aspect ranking, which aims to automatically identify important product aspects from online consumer reviews. The important aspects are identified according to two observations: (a) the important aspects of a product are usually commented by a large number of consumers; and (b) consumers’ opinions on the important aspects greatly influence their overall opinions on the product. In particular, given consumer reviews of a product, we first identify the product aspects by a shallow dependency parser and determine consumers’ opinions on these aspects via a sentiment classifier. We then develop an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers’ opinions given to each aspect on their overall opinions. The experimental results on 11 popular products in four domains demonstrate the effectiveness of our approach. We further apply the aspect ranking results to the application of documentlevel sentiment classification, and improve the performance significantly. 1 Introduction The rapidly expanding e-commerce has facilitated consumers to purchase products online. More than $156 million online product retail sales have been done in the US market during 2009 (Forrester Research, 2009). Most retail Web sites encourage consumers to write reviews to express their opinions on various aspects of the products. This gives rise to Figure 1: Sample reviews on iPhone 3GS product huge collections of consumer reviews on the Web. These reviews have become an important resource for both consumers and firms. Consumers commonly seek quality information from online consumer reviews prior to purchasing a product, while many firms use online consumer reviews as an important resource in their product development, marketing, and consumer relationship management. As illustrated in Figure 1, most online reviews express consumers’ overall opinion ratings on the product, and their opinions on multiple aspects of the product. While a product may have hundreds of aspects, we argue that some aspects are more important than the others and have greater influence on consumers’ purchase decisions as well as firms’ product development strategies. Take iPhone 3GS as an example, some aspects like “battery” and “speed,” are more important than the others like “moisture sensor.” Generally, identifying the important product aspects will benefit both consumers and firms. Consumers can conveniently make wise purchase decision by paying attentions on the important aspects, while firms can focus on improving the quality of 1496 these aspects and thus enhance the product reputation effectively. However, it is impractical for people to identify the important aspects from the numerous reviews manually. Thus, it becomes a compelling need to automatically identify the important aspects from consumer reviews. A straightforward solution for important aspect identification is to select the aspects that are frequently commented in consumer reviews as the important ones. 
However, consumers’ opinions on the frequent aspects may not influence their overall opinions on the product, and thus not influence consumers’ purchase decisions. For example, most consumers frequently criticize the bad “signal connection” of iPhone 4, but they may still give high overall ratings to iPhone 4. On the other hand, some aspects, such as “design” and “speed,” may not be frequently commented, but usually more important than “signal connection.” Hence, the frequencybased solution is not able to identify the truly important aspects. Motivated by the above observations, in this paper, we propose an effective approach to automatically identify the important product aspects from consumer reviews. Our assumption is that the important aspects of a product should be the aspects that are frequently commented by consumers, and consumers’ opinions on the important aspects greatly influence their overall opinions on the product. Given the online consumer reviews of a specific product, we first identify the aspects in the reviews using a shallow dependency parser (Wu et al., 2009), and determine consumers’ opinions on these aspects via a sentiment classifier. We then design an aspect ranking algorithm to identify the important aspects by simultaneously taking into account the aspect frequency and the influence of consumers’ opinions given to each aspect on their overall opinions. Specifically, we assume that consumer’s overall opinion rating on a product is generated based on a weighted sum of his/her specific opinions on multiple aspects of the product, where the weights essentially measure the degree of importance of the aspects. A probabilistic regression algorithm is then developed to derive these importance weights by leveraging the aspect frequency and the consistency between the overall opinions and the weighted sum of opinions on various aspects. We conduct experiments on 11 popular products in four domains. The consumer reviews on these products are crawled from the prevalent forum Web sites (e.g., cnet.com and viewpoint.com etc.) More details of our review corpus are discussed in Section 3. The experimental results demonstrate the effectiveness of our approach on important aspects identification. Furthermore, we apply the aspect ranking results to the application of document-level sentiment classification by carrying out the term-weighting based on the aspect importance. The results show that our approach can improve the performance significantly. The main contributions of this paper include, 1) We dedicate to the topic of aspect ranking, which aims to automatically identify important aspects of a product from consumer reviews. 2) We develop an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers’ opinions given to each aspect on their overall opinions. 3) We apply aspect ranking results to the application of document-level sentiment classification, and improve the performance significantly. There is another work named aspect ranking (Snyder et al., 2007). The task in this work is different from ours. This work mainly focuses on predicting opinionated ratings on aspects rather than identifying important aspects. The rest of this paper is organized as follows. Section 2 elaborates our aspect ranking approach. Section 3 presents the experimental results, while Section 4 introduces the application of document-level sentiment classification. 
Section 5 reviews related work and Section 6 concludes this paper with future works. 2 Aspect Ranking Framework In this section, we first present some notations and then elaborate the key components of our approach, including the aspect identification, sentiment classification, and aspect ranking algorithm. 2.1 Notations and Problem Formulation Let R = {r1, · · · , r|R|} denotes a set of online consumer reviews of a specific product. Each review r ∈R is associated with an overall opinion rating 1497 Or, and covers several aspects with consumer comments on these aspects. Suppose there are m aspects A = {a1, · · · , am} involved in the review corpus R, where ak is the k-th aspect. We define ork as the opinion on aspect ak in review r. We assume that the overall opinion rating Or is generated based on a weighted sum of the opinions on specific aspects ork (Wang et al., 2010). The weights are denoted as {ωrk}m k=1, each of which essentially measures the degree of importance of the aspect ak in review r. Our task is to derive the important weights of aspects, and identify the important aspects. Next, we will introduce the key components of our approach, including aspect identification that identifies the aspects ak in each review r, aspect sentiment classification which determines consumers’ opinions ork on various aspects, and aspect ranking algorithm that identifies the important aspects. 2.2 Aspect Identification As illustrated in Figure 1, there are usually two types of reviews, Pros and Cons review and free text reviews on the Web. For Pros and Cons reviews, the aspects are identified as the frequent noun terms in the reviews, since the aspects are usually noun or noun phrases (Liu, 2009), and it has been shown that simply extracting the frequent noun terms from the Pros and Cons reviews can get high accurate aspect terms (Liu el al., 2005). To identify the aspects in free text reviews, we first parse each review using the Stanford parser 1, and extract the noun phrases (NP) from the parsing tree as aspect candidates. While these candidates may contain much noise, we leverage the Pros and Cons reviews to assist identify aspects from the candidates. In particular, we explore the frequent noun terms in Pros and Cons reviews as features, and train a one-class SVM (Manevitz et al., 2002) to identify aspects in the candidates. While the obtained aspects may contain some synonym terms, such as “earphone” and “headphone,” we further perform synonym clustering to get unique aspects. Specifically, we first expand each aspect term with its synonym terms obtained from the synonym terms Web site 2, and then cluster the terms to obtain unique aspects based on 1http://nlp.stanford.edu/software/lex-parser.shtml 2http://thesaurus.com unigram feature. 2.3 Aspect Sentiment Classification Since the Pros and Cons reviews explicitly express positive and negative opinions on the aspects, respectively, our task is to determine the opinions in free text reviews. To this end, we here utilize Pros and Cons reviews to train a SVM sentiment classifier. Specifically, we collect sentiment terms in the Pros and Cons reviews as features and represent each review into feature vector using Boolean weighting. Note that we select sentiment terms as those appear in the sentiment lexicon provided by MPQA project (Wilson et al., 2005). With these features, we then train a SVM classifier based on Pros and Cons reviews. 
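As a rough sketch of this training step (using scikit-learn; the Pros reviews, Cons reviews and the sentiment lexicon are assumed to be already loaded, and the variable names are ours rather than the authors'):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def train_sentiment_classifier(pros_reviews, cons_reviews, sentiment_lexicon):
    # Pros reviews serve as positive examples, Cons reviews as negative examples.
    texts = list(pros_reviews) + list(cons_reviews)
    labels = [1] * len(pros_reviews) + [0] * len(cons_reviews)
    # Restrict the feature set to sentiment terms and use Boolean weighting.
    vectorizer = CountVectorizer(vocabulary=sorted(sentiment_lexicon), binary=True)
    X = vectorizer.fit_transform(texts)
    clf = LinearSVC()
    clf.fit(X, labels)
    return vectorizer, clf

def classify_expression(vectorizer, clf, expression):
    # Returns 1 for a positive opinion and 0 for a negative one.
    return int(clf.predict(vectorizer.transform([expression]))[0])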
Given a free text review, since it may cover various opinions on multiple aspects, we first locate the opinionated expression modifying each aspect, and determine the opinion on the aspect using the learned SVM classifier. In particular, since the opinionated expression on each aspect tends to contain sentiment terms and appear closely to the aspect (Hu and Liu, 2004), we select the expressions which contain sentiment terms and are at the distance of less than 5 from the aspect NP in the parsing tree. 2.4 Aspect Ranking Generally, consumer’s opinion on each specific aspect in the review influences his/her overall opinion on the product. Thus, we assume that the consumer gives the overall opinion rating Or based on the weighted sum of his/her opinion ork on each aspect ak: ∑m k=1 ωrkork, which can be rewritten as ωrT or, where ωr and or are the weight and opinion vectors. Inspired by the work of Wang et al. (2010), we view Or as a sample drawn from a Gaussian Distribution, with mean ωrT or and variance σ2, p(Or) = 1 √ 2πσ2 exp[−(Or −ωrT or)2 2σ2 ]. (1) To model the uncertainty of the importance weights ωr in each review, we assume ωr as a sample drawn from a Multivariate Gaussian Distribution, with µ as the mean vector and Σ as the covariance matrix, p(ωr) = 1 (2π)n/2|Σ|1/2 exp[−1 2(ωr −µ)T Σ−1(ωr −µ)]. (2) 1498 We further incorporate aspect frequency as a prior knowledge to define the distribution of µ and Σ. Specifically, the distribution of µ and Σ is defined based on its Kullback-Leibler (KL) divergence to a prior distribution with a mean vector µ0 and an identity covariance matrix I in Eq.3. Each element in µ0 is defined as the frequency of the corresponding aspect: frequency(ak)/ ∑m i=1 frequency(ai). p(µ, Σ) = exp[−φ · KL(Q(µ, Σ)||Q(µ0, I))], (3) where KL(·, ·) is the KL divergence, Q(µ, Σ) denotes a Multivariate Gaussian Distribution, and φ is a tradeoff parameter. Base on the above definition, the probability of generating the overall opinion rating Or on review r is given as, p(Or|Ψ, r) = ∫ p(Or|ωrT or, σ2) · p(ωr|µ, Σ) · p(µ, Σ)dωr, (4) where Ψ = {ω, µ, Σ, σ2} are the model parameters. Next, we utilize Maximum Log-likelihood (ML) to estimate the model parameters given the consumer reviews corpus. In particular, we aim to find an optimal ˆΨ to maximize the probability of observing the overall opinion ratings in the reviews corpus. ˆΨ = arg max Ψ ∑ r∈R log(p(Or|Ψ, r)) = arg min Ψ (|R| −1) log det(Σ) + ∑ r∈R [log σ2+ (Or−ωrT or)2 σ2 + (ωr −µ)T Σ−1(ωr −µ)]+ (tr(Σ) + (µ0 −µ)T I(µ0 −µ)). (5) For the sake of simplicity, we denote the objective function ∑ r∈R log(p(Or|Ψ, r)) as Γ(Ψ). The derivative of the objective function with respect to each model parameter vanishes at the minimizer: ∂Γ(Ψ) ∂ωr = −(ωrT or−Or)or σ2 −Σ−1(ωr −µ) = 0; (6) ∂Γ(Ψ) ∂µ = ∑ r∈R [−Σ−1(ωr −µ)] −φ · I(µ0 −µ) = 0; (7) ∂Γ(Ψ) ∂Σ = ∑ r∈R {−(Σ−1)T −[−(Σ−1)T (ωr −µ) (ωr −µ)T (Σ−1)T ]} + φ · [ (Σ−1)T −I ] = 0; (8) ∂Γ(Ψ) ∂σ2 = ∑ r∈R (−1 σ2 + (Or−ωrT or)2 σ4 ) = 0, (9) which lead to the following solutions: ˆωr = ( ororT σ2 + Σ−1)−1( Oror σ2 + Σ−1µ); (10) ˆµ = (|R|Σ−1 + φ · I)−1(Σ−1 ∑ r∈R ωr + φ · Iµ0); (11) ˆΣ = {[ 1 φ ∑ r∈R [ (ωr −µ)(ωr −µ)T ] + ( |R|−φ 2φ )2I]1/2 −(|R|−φ) 2φ I}T ; (12) ˆσ2 = 1 |R| ∑ r∈R (Or −ωrT or)2. (13) We can see that the above parameters are involved in each other’s solution. We here utilize Alternating Optimization technique to derive the optimal parameters in an iterative manner. We first hold the parameters µ, Σ and σ2 fixed and update the parameters ωr for each review r ∈R. 
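As a concrete illustration of this first step, the per-review update of Eq. 10 might be implemented as follows (a NumPy sketch; the variable names are ours):

import numpy as np

def update_review_weights(o_r, O_r, mu, Sigma, sigma2):
    # o_r: (m,) opinions on the aspects in review r; O_r: overall rating of r.
    # mu, Sigma and sigma2 are held fixed in this step.
    Sigma_inv = np.linalg.inv(Sigma)
    A = np.outer(o_r, o_r) / sigma2 + Sigma_inv    # left-hand factor of Eq. 10
    b = O_r * o_r / sigma2 + Sigma_inv @ mu        # right-hand factor of Eq. 10
    return np.linalg.solve(A, b)                   # the updated omega_r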
Then, we update the parameters µ, Σ and σ2 with fixed ωr (r ∈R). These two steps are alternatively iterated until the Eq.5 converges. As a result, we obtain the optimal importance weights ωr which measure the importance of aspects in review r ∈R. We then compute the final importance score ϖk for each aspect ak by integrating its importance score in all the reviews as, ϖk = 1 |R| ∑ r∈R ωrk, k = 1, · · · , m (14) It is worth noting that the aspect frequency is considered again in this integration process. According to the importance score ϖk, we can identify important aspects. 3 Evaluations In this section, we evaluate the effectiveness of our approach on aspect identification, sentiment classification, and aspect ranking. 3.1 Data and Experimental Setting The details of our product review data set is given in Table 1. This data set contains consumer reviews on 11 popular products in 4 domains. These reviews were crawled from the prevalent forum Web sites, including cnet.com, viewpoints.com, reevoo.com and gsmarena.com. All of the reviews were posted 1499 between June, 2009 and Sep 2010. The aspects of the reviews, as well as the opinions on the aspects were manually annotated as the gold standard for evaluations. Product Name Domain Review# Sentence# Canon EOS 450D (Canon EOS) camera 440 628 Fujifilm Finepix AX245W (Fujifilm) camera 541 839 Panasonic Lumix DMC-TZ7 (Panasonic) camera 650 1,546 Apple MacBook Pro (MacBook) laptop 552 4,221 Samsung NC10 (Samsung) laptop 2,712 4,946 Apple iPod Touch 2nd (iPod Touch) MP3 4,567 10,846 Sony NWZ-S639 16GB (Sony NWZ) MP3 341 773 BlackBerry Bold 9700 (BlackBerry) phone 4,070 11,008 iPhone 3GS 16GB (iPhone 3GS) phone 12,418 43,527 Nokia 5800 XpressMusic (Nokia 5800) phone 28,129 75,001 Nokia N95 phone 15,939 44,379 Table 1: Statistics of the Data Sets, # denotes the size of the reviews/sentences. To examine the performance on aspect identification and sentiment classification, we employed F1-measure, which was the combination of precision and recall, as the evaluation metric. To evaluate the performance on aspect ranking, we adopted Normalized Discounted Cumulative Gain at top k (NDCG@k) (Jarvelin and Kekalainen, 2002) as the performance metric. Given an aspect ranking list a1, · · · , ak, NDCG@k is calculated by NDCG@k = 1 Z k ∑ i=1 2t(i) −1 log(1 + i), (15) where t(i) is the function that represents the reward given to the aspect at position i, Z is a normalization term derived from the top k aspects of a perfect ranking, so as to normalize NDCG@k to be within [0, 1]. This evaluation metric will favor the ranking which ranks the most important aspects at the top. For the reward t(i), we labeled each aspect as one of the three scores: Un-important (score 1), Ordinary (score 2) and Important (score 3). Three volunteers were invited in the annotation process as follows. We first collected the top k aspects in all the rankings produced by various evaluated methods (maximum k is 15 in our experiment). We then sampled some reviews covering these aspects, and provided the reviews to each annotator to read. Each review contains the overall opinion rating, the highlighted aspects, and opinion terms. Afterward, the annotators were required to assign an importance score to each aspect. Finally, we took the average of their scorings as the corresponding importance scores of the aspects. In addition, there is only one parameter φ that needs to be tuned in our approach. Throughout the experiments, we empirically set φ as 0.001. 
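For reference, NDCG@k as defined in Eq. 15 can be computed with a few lines of code (a sketch; Eq. 15 does not state the logarithm base, so base 2 is assumed here, following Jarvelin and Kekalainen (2002)):

import math

def dcg_at_k(rewards, k):
    # rewards[i-1] is t(i), the annotated importance score (1, 2 or 3)
    # of the aspect ranked at position i.
    return sum((2 ** r - 1) / math.log2(1 + i)
               for i, r in enumerate(rewards[:k], start=1))

def ndcg_at_k(ranked_rewards, all_rewards, k):
    # Z is the DCG@k of a perfect ranking over all annotated aspects.
    ideal = dcg_at_k(sorted(all_rewards, reverse=True), k)
    return dcg_at_k(ranked_rewards, k) / ideal if ideal > 0 else 0.0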
3.2 Evaluations on Aspect Identification We compared our aspect identification approach against two baselines: a) the method proposed by Hu and Liu (2004), which was based on the association rule mining, and b) the method proposed by Wu et al. (2009), which was based on a dependency parser. The results are presented in Table 2. On average, our approach significantly outperforms Hu’s method and Wu’ method in terms of F1-measure by over 5.87% and 3.27%, respectively. In particular, our approach obtains high precision. Such results imply that our approach can accurately identify the aspects from consumer reviews by leveraging the Pros and Cons reviews. Data set Hu’s Method Wu’s Method Our Method Canon EOS 0.681 0.686 0.728 Fujifilm 0.685 0.666 0.710 Panasonic 0.636 0.661 0.706 MacBook 0.680 0.733 0.747 Samsung 0.594 0.631 0.712 iPod Touch 0.650 0.660 0.718 Sony NWZ 0.631 0.692 0.760 BlackBerry 0.721 0.730 0.734 iPhone 3GS 0.697 0.736 0.740 Nokia 5800 0.715 0.745 0.747 Nokia N95 0.700 0.737 0.741 Table 2: Evaluations on Aspect Identification. * significant t-test, p-values<0.05. 3.3 Evaluations on Sentiment Classification In this experiment, we implemented the following sentiment classification methods (Pang and Lee, 2008): 1) Unsupervised method. We employed one unsupervised method which was based on opinionated term counting via SentiWordNet (Ohana et al., 2009). 2) Supervised method. We employed three supervised methods proposed in Pang et al. (2002), including Na¨ıve Bayes (NB), Maximum Entropy (ME), SVM. These classifiers were trained based on the Pros and Cons reviews as described in Section 2.3. 1500 The comparison results are showed in Table 3. We can see that supervised methods significantly outperform unsupervised method. For example, the SVM classifier outperforms the unsupervised method in terms of average F1-measure by over 10.37%. Thus, we can deduce from such results that the Pros and Cons reviews are useful for sentiment classification. In addition, among the supervised classifiers, SVM classifier performs the best in most products, which is consistent with the previous research (Pang et al., 2002). Data set Senti NB SVM ME Canon EOS 0.628 0.720 0.739 0.726 Fujifilm 0.690 0.781 0.791 0.778 Panasonic 0.625 0.694 0.719 0.697 MacBook 0.708 0.820 0.828 0.797 Samsung 0.675 0.723 0.717 0.714 iPod Touch 0.711 0.792 0.805 0.791 Sony NWZ 0.621 0.722 0.737 0.725 BlackBerry 0.699 0.819 0.794 0.788 iPhone 3GS 0.717 0.811 0.829 0.822 Nokia 5800 0.736 0.840 0.851 0.817 Nokia N95 0.706 0.829 0.849 0.826 Table 3: Evaluations on Sentiment Classification. Senti denotes the method based on SentiWordNet. * significant t-test, p-values<0.05. 3.4 Evaluations on Aspect Ranking In this section, we compared our aspect ranking algorithm against the following three methods. 1) Frequency-based method. The method ranks the aspects based on aspect frequency. 2) Correlation-based method. This method measures the correlation between the opinions on specific aspects and the overall opinion. It counts the number of the cases when such two kinds of opinions are consistent, and ranks the aspects based on the number of the consistent cases. 3) Hybrid method. This method captures both the aspect frequency and correlation by a linear combination, as λ· Frequency-based Ranking + (1 −λ)· Correlation-based Ranking, where λ is set to 0.5. The comparison results are showed in Table 4. 
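For clarity, the hybrid baseline can be sketched as follows (assuming the frequency-based and correlation-based rankings are first expressed as scores on comparable scales; the variable names are ours):

def hybrid_ranking(freq_scores, corr_scores, lam=0.5):
    # Linearly combine the two (normalised) scores and rank aspects by the result.
    combined = {a: lam * freq_scores[a] + (1 - lam) * corr_scores.get(a, 0.0)
                for a in freq_scores}
    return sorted(combined, key=combined.get, reverse=True)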
On average, our approach outperforms the frequencybased method, correlation-based method, and hybrid method in terms of NDCG@5 by over 6.24%, 5.79% and 5.56%, respectively. It improves the performance over such three methods in terms of NDCG@10 by over 3.47%, 2.94% and 2.58%, respectively, while in terms of NDCG@15 by over 4.08%, 3.04% and 3.49%, respectively. We can deduce from the results that our aspect ranking algorithm can effectively identify the important aspects from consumer reviews by leveraging the aspect frequency and the influence of consumers’ opinions given to each aspect on their overall opinions. Table 5 shows the aspect ranking results of these four methods. Due to the space limitation, we here only show top 10 aspects of the product iphone 3GS. We can see that our approach performs better than the others. For example, the aspect “phone” is ranked at the top by the other methods. However, “phone” is a general but not important aspect. # Frequency Correlated Hybrid Our Method 1 Phone Phone Phone Usability 2 Usability Usability Usability Apps 3 3G Apps Apps 3G 4 Apps 3G 3G Battery 5 Camera Camera Camera Looking 6 Feature Looking Looking Storage 7 Looking Feature Feature Price 8 Battery Screen Battery Software 9 Screen Battery Screen Camera 10 Flash Bluetooth Flash Call quality Table 5: iPhone 3GS Aspect Ranking Results. To further investigate the reasonability of our ranking results, we refer to one of the public user feedback reports, the “china unicom 100 customers iPhone user feedback report” (Chinaunicom Report, 2009). The report demonstrates that the top four aspects of iPhone product, which users most concern with, are “3G Network” (30%), “usability” (30%), “out-looking design” (26%), “application” (15%). All of these aspects are in the top 10 of our ranking results. Therefore, we can conclude that our approach is able to automatically identify the important aspects from numerous consumer reviews. 4 Applications The identification of important aspects can support a wide range of applications. For example, we can 1501 Frequency Correlation Hybrid Our Method Data set @5 @10 @15 @5 @10 @15 @5 @10 @15 @5 @10 @15 Canon EOS 0.735 0.771 0.740 0.735 0.762 0.779 0.735 0.798 0.742 0.862 0.824 0.794 Fujifilm 0.816 0.705 0.693 0.760 0.756 0.680 0.816 0.759 0.682 0.863 0.801 0.760 Panasonic 0.744 0.807 0.783 0.763 0.815 0.792 0.744 0.804 0.786 0.796 0.834 0.815 MacBook 0.744 0.771 0.762 0.763 0.746 0.769 0.763 0.785 0.772 0.874 0.776 0.760 Samsung 0.964 0.765 0.794 0.964 0.820 0.840 0.964 0.820 0.838 0.968 0.826 0.854 iPod Touch 0.836 0.830 0.727 0.959 0.851 0.744 0.948 0.785 0.733 0.959 0.817 0.801 Sony NWZ 0.937 0.743 0.742 0.937 0.781 0.797 0.937 0.740 0.794 0.944 0.775 0.815 BlackBerry 0.837 0.824 0.766 0.847 0.825 0.771 0.847 0.829 0.768 0.874 0.797 0.779 iPhone 3GS 0.897 0.836 0.832 0.886 0.814 0.825 0.886 0.829 0.826 0.948 0.902 0.860 Nokia 5800 0.834 0.779 0.796 0.834 0.781 0.779 0.834 0.781 0.779 0.903 0.811 0.814 Nokia N95 0.675 0.680 0.717 0.619 0.619 0.691 0.619 0.678 0.696 0.716 0.731 0.748 Table 4: Evaluations on Aspect Ranking. @5, @10, @15 denote the evaluation metrics of NDCG@5, NDCG@10, and NDCG@15, respectively. * significant t-test, p-values<0.05. provide product comparison on the important aspects to users, so that users can make wise purchase decisions conveniently. In the following, we apply the aspect ranking results to assist document-level review sentiment classification. 
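As a preview of the term-weighting scheme detailed below, a minimal sketch is given here; the weight 1 + φ · ϖk and the choice of features follow the description in the next paragraphs, and the data layout is hypothetical:

def weighted_features(review_terms, aspect_importance, phi=100.0):
    # review_terms maps each aspect or sentiment term in the review to the
    # aspect it describes (or to None); aspect_importance maps aspects to
    # their importance scores (the varpi_k values from the ranking step).
    return {term: (1.0 + phi * aspect_importance.get(aspect, 0.0)) if aspect else 1.0
            for term, aspect in review_terms.items()}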
Generally, a review document contains consumer’s positive/negative opinions on various aspects of the product. It is difficult to get the accurate overall opinion of the whole review without knowing the importance of these aspects. In addition, when we learn a document-level sentiment classifier, the features generated from unimportant aspects lack of discriminability and thus may deteriorate the performance of the classifier (Fang et al., 2010). While the important aspects and the sentiment terms on these aspects can greatly influence the overall opinions of the review, they are highly likely to be discriminative features for sentiment classification. These observations motivate us to utilize aspect ranking results to assist classifying the sentiment of review documents. Specifically, we randomly sampled 100 reviews of each product as the testing data and used the remaining reviews as the training data. We first utilized our approach to identify the importance aspects from the training data. We then explored the aspect terms and sentiment terms as features, based on which each review is represented as a feature vector. Here, we give more emphasis on the important aspects and the sentiment terms that modify these aspects. In particular, we set the term-weighting as 1 + φ · ϖk, where ϖk is the importance score of the aspect ak, φ is set to 100. Based on the weighted features, we then trained a SVM classifier using the training reviews to determine the overall opinions on the testing reviews. For the performance comparison, we compared our approach against two baselines, including Boolean weighting method and frequency weighting (tf) method (Paltoglou et al., 2010) that do not utilize the importance of aspects. The comparison results are shown in Table 6. We can see that our approach (IA) significantly outperforms the other methods in terms of average F1-measure by over 2.79% and 4.07%, respectively. The results also show that the Boolean weighting method outperforms the frequency weighting method in terms of average F1-measure by over 1.25%, which are consistent with the previous research by Pang et al. (2002). On the other hand, from the IA weighting formula, we observe that without using the important aspects, our term-weighting function will be equal to Boolean weighting. Thus, we can speculate that the identification of important aspects is beneficial to improving the performance of documentlevel sentiment classification. 5 Related Work Existing researches mainly focused on determining opinions on the reviews, or identifying aspects from these reviews. They viewed each aspect equally without distinguishing the important ones. In this section, we review existing researches related to our work. 
Analysis of the opinion on whole review text had 1502 SV M + Boolean SV M + tf SV M + IA Data set P R F1 P R F1 P R F1 Canon EOS 0.689 0.663 0.676 0.679 0.654 0.666 0.704 0.721 0.713 Fujifilm 0.700 0.687 0.693 0.690 0.670 0.680 0.731 0.724 0.727 Panasonic 0.659 0.717 0.687 0.650 0.693 0.671 0.696 0.713 0.705 MacBook 0.744 0.700 0.721 0.768 0.675 0.718 0.790 0.717 0.752 Samsung 0.755 0.690 0.721 0.716 0.725 0.720 0.732 0.765 0.748 iPod Touch 0.686 0.746 0.714 0.718 0.667 0.691 0.749 0.726 0.737 Sony NWZ 0.719 0.652 0.684 0.665 0.646 0.655 0.732 0.684 0.707 BlackBerry 0.763 0.719 0.740 0.752 0.709 0.730 0.782 0.758 0.770 iPhone 3GS 0.777 0.775 0.776 0.772 0.762 0.767 0.820 0.788 0.804 Nokia 5800 0.755 0.836 0.793 0.744 0.815 0.778 0.805 0.821 0.813 Nokia N95 0.722 0.699 0.710 0.695 0.708 0.701 0.768 0.732 0.750 Table 6: Evaluations on Term Weighting methods for Document-level Review Sentiment Classification. IA denotes the term weighing based on the important aspects. * significant t-test, p-values<0.05. been extensively studied (Pang and Lee, 2008). Earlier research had been studied unsupervised (Kim et al., 2004), supervised (Pang et al., 2002; Pang et al., 2005) and semi-supervised approaches (Goldberg et al., 2006) for the classification. For example, Mullen et al. (2004) proposed an unsupervised classification method which exploited pointwise mutual information (PMI) with syntactic relations and other attributes. Pang et al. (2002) explored several machine learning classifiers, including Na¨ıve Bayes, Maximum Entropy, SVM, for sentiment classification. Goldberg et al. (2006) classified the sentiment of the review using the graph-based semi-supervised learning techniques, while Li el al. (2009) tackled the problem using matrix factorization techniques with lexical prior knowledge. Since the consumer reviews usually expressed opinions on multiple aspects, some works had drilled down to the aspect-level sentiment analysis, which aimed to identify the aspects from the reviews and to determine the opinions on the specific aspects instead of the overall opinion. For the topic of aspect identification, Hu and Liu (2004) presented the association mining method to extract the frequent terms as the aspects. Subsequently, Popescu et al. (2005) proposed their system OPINE, which extracted the aspects based on the KnowItAll Web information extraction system (Etzioni et al., 2005). Liu el al. (2005) proposed a supervised method based on language pattern mining to identify the aspects in the reviews. Later, Mei et al. (2007) proposed a probabilistic topic model to capture the mixture of aspects and sentiments simultaneously. Afterwards, Wu et al. (2009) utilized the dependency parser to extract the noun phrases and verb phrases from the reviews as the aspect candidates. They then trained a language model to refine the candidate set, and to obtain the aspects. On the other hand, for the topic of sentiment classification on the specific aspect, Snyder et al. (2007) considered the situation when the consumers’ opinions on one aspect could influence their opinions on others. They thus built a graph to analyze the meta-relations between opinions, such as agreement and contrast. And they proposed a Good Grief algorithm to leveraging such meta-relations to improve the prediction accuracy of aspect opinion ratings. In addition, Wang et al. (2010) proposed the topic of latent aspect rating which aimed to infer the opinion rating on the aspect. 
They first employed a bootstrapping-based algorithm to identify the major aspects via a few seed word aspects. They then proposed a generative Latent Rating Regression model (LRR) to infer aspect opinion ratings based on the review content and the associated overall rating. While there were usually huge collection of reviews, some works had concerned the topic of aspect-based sentiment summarization to combat the information overload. They aimed to summarize all the reviews and integrate major opinions on various aspects for a given product. For example, Titov et al. (2008) explored a topic modeling method to generate a summary based on multiple aspects. They utilized topics to describe aspects and incor1503 porated a regression model fed by the ground-truth opinion ratings. Additionally, Lu el al. (2009) proposed a structured PLSA method, which modeled the dependency structure of terms, to extract the aspects in the reviews. They then aggregated opinions on each specific aspects and selected representative text segment to generate a summary. In addition, some works proposed the topic of product ranking which aimed to identify the best products for each specific aspect (Zhang et al., 2010). They used a PageRank style algorithm to mine the aspect-opinion graph, and to rank the products for each aspect. Different from previous researches, we dedicate our work to identifying the important aspects from the consumer reviews of a specific product. 6 Conclusions and Future Works In this paper, we have proposed to identify the important aspects of a product from online consumer reviews. Our assumption is that the important aspects of a product should be the aspects that are frequently commented by consumers and consumers’ opinions on the important aspects greatly influence their overall opinions on the product. Based on this assumption, we have developed an aspect ranking algorithm to identify the important aspects by simultaneously considering the aspect frequency and the influence of consumers’ opinions given to each aspect on their overall opinions. We have conducted experiments on 11 popular products in four domains. Experimental results have demonstrated the effectiveness of our approach on important aspects identification. We have further applied the aspect ranking results to the application of document-level sentiment classification, and have significantly improved the classification performance. In the future, we will apply our approach to support other applications. Acknowledgments This work is supported in part by NUS-Tsinghua Extreme Search (NExT) project under the grant number: R-252-300-001-490. We give warm thanks to the project and anonymous reviewers for their comments. References P. Beineke, T. Hastie, C. Manning, and S. Vaithyanathan. An Exploration of Sentiment Summarization. AAAI, 2003. G. Carenini, R.T. Ng, and E. Zwart. Extracting Knowledge from Evaluative Text. K-CAP, 2005. G. Carenini, R.T. Ng, and E. Zwart. Multi-document Summarization of Evaluative Text. ACL, 2006. China Unicom 100 Customers iPhone User Feedback Report, 2009. Y. Choi and C. Cardie. Hierarchical Sequential Learning for Extracting Opinions and Their Attributes. ACL, 2010. H. Cui, V. Mittal, and M. Datar. Comparative Experiments on Sentiment Classification for Online Product Reviews. AAAI, 2006. S. Dasgupta and V. Ng. Mine the Easy, Classify the Hard: A Semi-supervised Approach to Automatic Sentiment Classification. ACL, 2009. K. Dave, S. Lawrence, and D.M. Pennock. 
Opinion Extraction and Semantic Classification of Product Reviews. WWW, 2003. A. Esuli and F. Sebastiani. A Publicly Available Lexical Resource for Opinion Mining. LREC, 2006. O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. Unsupervised Named-entity Extraction from the Web: An Experimental Study. Artificial Intelligence, 2005. J. Fang, B. Price, and L. Price. Pruning Non-Informative Text Through Non-Expert Annotations to Improve Aspect-Level Sentiment Classification. COLING, 2010. O. Feiguina and G. Lapalme. Query-based Summarization of Customer Reviews. AI, 2007. Forrester Research. State of Retailing Online 2009: Marketing Report. http://www.shop.org/soro, 2009. A. Goldberg and X. Zhu. Seeing Stars when There aren’t Many Stars: Graph-based Semi-supervised Learning for Sentiment Categorization. ACL, 2006. M. Gamon, A. Aue, S. Corston-Oliver, and E. Ringger. Pulse: Mining Customer Opinions from Free Text. IDA, 2005. M. Hu and B. Liu. Mining and Summarizing Customer Reviews. SIGKDD, 2004. K. Jarvelin and J. Kekalainen. Cumulated Gain-based Evaluation of IR Techniques. TOIS, 2002. S. Kim and E. Hovy. Determining the Sentiment of Opinions. COLING, 2004. J. Kim, J.J. Li, and J.H. Lee. Discovering the Discriminative Views: Measuring Term Weights for Sentiment Analysis. ACL, 2009. 1504 Kelsey Research and comscore. Online ConsumerGenerated Reviews Have Significant Impact on Offline Purchase Behavior. K. Lerman, S. Blair-Goldensohn, and R. McDonald. Sentiment Summarization: Evaluating and Learning User Preferences. EACL, 2009. B. Li, L. Zhou, S. Feng, and K.F. Wong. A Unified Graph Model for Sentence-Based Opinion Retrieval. ACL, 2010. T. Li and Y. Zhang, and V. Sindhwani. A Non-negative Matrix Tri-factorization Approach to Sentiment Classification with Lexical Prior Knowledge. ACL, 2009. B. Liu, M. Hu, and J. Cheng. Opinion Observer: Analyzing and Comparing Opinions on the Web. WWW, 2005. B. Liu. Handbook Chapter: Sentiment Analysis and Subjectivity. Handbook of Natural Language Processing. Marcel Dekker, Inc. New York, NY, USA, 2009. Y. Lu, C. Zhai, and N. Sundaresan. Rated Aspect Summarization of Short Comments. WWW, 2009. L.M. Manevitz and M. Yousef. One-class svms for Document Classification. The Journal of Machine Learning, 2002. R. McDonal, K. Hannan, T. Neylon, M. Wells, and J. Reynar. Structured Models for Fine-to-coarse Sentiment Analysis. ACL, 2007. Q. Mei, X. Ling, M. Wondra, H. Su, and C.X. Zhai. Topic Sentiment Mixture: Modeling Facets and Opinions in Weblogs. WWW, 2007. H.J. Min and J.C. Park. Toward Finer-grained Sentiment Identification in Product Reviews Through Linguistic and Ontological Analyses. ACL, 2009. T. Mullen and N. Collier. Sentiment Analysis using Support Vector Machines with Diverse Information Sources. EMNLP, 2004. N. Nanas, V. Uren, and A.D. Roeck. Building and Applying a Concept Hierarchy Representation of a User Profile. SIGIR, 2003. H. Nishikawa, T. Hasegawa, Y. Matsuo, and G. Kikui. Optimizing Informativeness and Readability for Sentiment Summarization. ACL, 2010. B. Ohana and B. Tierney. Sentiment Classification of Reviews Using SentiWordNet. IT&T Conference, 2009. G. Paltoglou and M. Thelwall. A study of Information Retrieval Weighting Schemes for Sentiment Analysis. ACL, 2010. B. Pang, L. Lee, and S. Vaithyanathan. Thumbs up? Sentiment Classification using Machine Learning Techniques. EMNLP, 2002. B. Pang, L. Lee, and S. Vaithyanathan. 
A Sentimental Education: Sentiment Analysis using Subjectivity Summarization based on Minimum cuts Techniques. ACL, 2004. B. Pang and L. Lee. Seeing stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales. ACL, 2005. B. Pang and L. Lee. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2008. A.-M. Popescu and O. Etzioni. Extracting Product Features and Opinions from Reviews. HLT/EMNLP, 2005. R. Prabowo and M. Thelwall. Sentiment analysis: A Combined Approach. Journal of Informetrics, 2009. G. Qiu, B. Liu, J. Bu, and C. Chen.. Expanding Domain Sentiment Lexicon through Double Propagation. IJCAI, 2009. M. Sanderson and B. Croft. Document-word Coregularization for Semi-supervised Sentiment Analysis. ICDM, 2008. B. Snyder and R. Barzilay. Multiple Aspect Ranking using the Good Grief Algorithm. NAACL HLT, 2007. S. Somasundaran, G. Namata, L. Getoor, and J. Wiebe. Opinion Graphs for Polarity and Discourse Classification. ACL, 2009. Q. Su, X. Xu, H. Guo, X. Wu, X. Zhang, B. Swen, and Z. Su. Hidden Sentiment Association in Chinese Web Opinion Mining. WWW, 2008. C. Toprak, N. Jakob, and I. Gurevych. Sentence and Expression Level Annotation of Opinions in UserGenerated Discourse. ACL, 2010. P. Turney. Thumbs up or Thumbs down? Semantic Orientation Applied to Unsupervised Classification of Reviews. ACL, 2002. I. Titov and R. McDonald. A Joint Model of Text and Aspect Ratings for Sentiment Summarization. ACL, 2008. H. Wang, Y. Lu, and C.X. Zhai. Latent Aspect Rating Analysis on Review Text Data: A Rating Regression Approach. KDD, 2010. B. Wei and C. Pal. Cross Lingual Adaptation: An Experiment on Sentiment Classifications. ACL, 2010. T. Wilson, J. Wiebe, and P. Hoffmann. Recognizing Contextual Polarity in Phrase-level Sentiment Analysis. HLT/EMNLP, 2005. T. Wilson and J. Wiebe. Annotating Attributions and Private States. ACL, 2005. Y. Wu, Q. Zhang, X. Huang, and L. Wu. Phrase Dependency Parsing for Opinion Mining. ACL, 2009. K. Zhang, R. Narayanan, and A. Choudhary. Voice of the Customers: Mining Online Customer Reviews for Product Feature-based Ranking. WOSN, 2010. J. Zhu, H. Wang, and B.K. Tsou. Aspect-based Sentence Segmentation for Sentiment Summarization. TSA, 2009. L. Zhuang, F. Jing, and X.Y. Zhu. Movie Review Mining and Summarization. CIKM, 2006. 1505
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1506–1515, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Collective Classification of Congressional Floor-Debate Transcripts Clinton Burfoot, Steven Bird and Timothy Baldwin Department of Computer Science and Software Engineering University of Melbourne, VIC 3010, Australia {cburfoot, sb, tim}@csse.unimelb.edu.au Abstract This paper explores approaches to sentiment classification of U.S. Congressional floordebate transcripts. Collective classification techniques are used to take advantage of the informal citation structure present in the debates. We use a range of methods based on local and global formulations and introduce novel approaches for incorporating the outputs of machine learners into collective classification algorithms. Our experimental evaluation shows that the mean-field algorithm obtains the best results for the task, significantly outperforming the benchmark technique. 1 Introduction Supervised document classification is a well-studied task. Research has been performed across many document types with a variety of classification tasks. Examples are topic classification of newswire articles (Yang and Liu, 1999), sentiment classification of movie reviews (Pang et al., 2002), and satire classification of news articles (Burfoot and Baldwin, 2009). This and other work has established the usefulness of document classifiers as stand-alone systems and as components of broader NLP systems. This paper deals with methods relevant to supervised document classification in domains with network structures, where collective classification can yield better performance than approaches that consider documents in isolation. Simply put, a network structure is any set of relationships between documents that can be used to assist the document classification process. Web encyclopedias and scholarly publications are two examples of document domains where network structures have been used to assist classification (Gantner and Schmidt-Thieme, 2009; Cao and Gao, 2005). The contribution of this research is in four parts: (1) we introduce an approach that gives better than state of the art performance for collective classification on the ConVote corpus of congressional debate transcripts (Thomas et al., 2006); (2) we provide a comparative overview of collective document classification techniques to assist researchers in choosing an algorithm for collective document classification tasks; (3) we demonstrate effective novel approaches for incorporating the outputs of SVM classifiers into collective classifiers; and (4) we demonstrate effective novel feature models for iterative local classification of debate transcript data. In the next section (Section 2) we provide a formal definition of collective classification and describe the ConVote corpus that is the basis for our experimental evaluation. Subsequently, we describe and critique the established benchmark approach for congressional floor-debate transcript classification, before describing approaches based on three alternative collective classification algorithms (Section 3). We then present an experimental evaluation (Section 4). Finally, we describe related work (Section 5) and offer analysis and conclusions (Section 6). 
2 Task Definition 2.1 Collective Classification Given a network and an object o in the network, there are three types of correlations that can be used 1506 to infer a label for o: (1) the correlations between the label of o and its observed attributes; (2) the correlations between the label of o and the observed attributes and labels of nodes connected to o; and (3) the correlations between the label of o and the unobserved labels of objects connected to o (Sen et al., 2008). Standard approaches to classification generally ignore any network information and only take into account the correlations in (1). Each object is classified as an individual instance with features derived from its observed attributes. Collective classification takes advantage of the network by using all three sources. Instances may have features derived from their source objects or from other objects. Classification proceeds in a joint fashion so that the label given to each instance takes into account the labels given to all of the other instances. Formally, collective classification takes a graph, made up of nodes V = {V1, . . . , Vn} and edges E. The task is to label the nodes Vi ∈V from a label set L = {L1, . . . , Lq}, making use of the graph in the form of a neighborhood function N = {N1, . . . , Nn}, where Ni ⊆V \ {Vi}. 2.2 The ConVote Corpus ConVote, compiled by Thomas et al. (2006), is a corpus of U.S. congressional debate transcripts. It consists of 3,857 speeches organized into 53 debates on specific pieces of legislation. Each speech is tagged with the identity of the speaker and a “for” or “against” label derived from congressional voting records. In addition, places where one speaker cites another have been annotated, as shown in Figure 1. We apply collective classification to ConVote debates by letting V refer to the individual speakers in a debate and populating N using the citation graph between speakers. We set L = {y, n}, corresponding to “for” and “against” votes respectively. The text of each instance is the concatenation of the speeches by a speaker within a debate. This results in a corpus of 1,699 instances with a roughly even class distribution. Approximately 70% of these are connected, i.e. they are the source or target of one or more citations. The remainder are isolated. 3 Collective Classification Techniques In this section we describe techniques for performing collective classification on the ConVote corpus. We differentiate between dual-classifier and iterative-classifier approaches. Dual-classifier approach: This approach uses a collective classification algorithm that takes inputs from two classifiers: (1) a content-only classifier that determines the likelihood of a y or n label for an instance given its text content; and (2) a citation classifier that determines, based on citation information, whether a given pair of instances are “same class” or “different class”. Let Ψ denote a set of functions representing the classification preferences produced by the contentonly and citation classifiers: • For each Vi ∈V, φi ∈Ψ is a function φi: L → R+ ∪{0}. • For each (Vi, Vj) ∈E, ψij ∈Ψ is a function ψij: L × L →R+ ∪{0}. Later in this section we will describe three collective classification algorithms capable of performing overall classification based on these inputs: (1) the minimum-cut approach, which is the benchmark for collective classification with ConVote, established by Thomas et al.; (2) loopy belief propagation; and (3) mean-field. 
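To make the Ψ inputs concrete, one possible in-memory representation for a single debate is sketched below (all numeric values are purely illustrative; the speaker identifiers are those from the corpus excerpt in Figure 1):

LABELS = ('y', 'n')

# phi_i: content-only classifier preferences for each speaker (node).
phi = {'400378': {'y': 0.2, 'n': 0.8},
       '400115': {'y': 0.7, 'n': 0.3}}

# psi_ij: citation classifier preferences for each citing pair (edge).
psi = {('400115', '400378'): {('y', 'y'): 0.3, ('n', 'n'): 0.3,
                              ('y', 'n'): 0.7, ('n', 'y'): 0.7}}

# Neighborhood function N, induced by the citation edges.
neighbors = {}
for i, j in psi:
    neighbors.setdefault(i, []).append(j)
    neighbors.setdefault(j, []).append(i)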
We will show that these latter two techniques, which are both approximate solutions for Markov random fields, are superior to minimumcut for the task. Figure 2 gives a visual overview of the dualclassifier approach. Iterative-classifier approach: This approach incorporates content-only and citation features into a single local classifier that works on the assumption that correct neighbor labels are already known. This approach represents a marked deviation from the dual-classifier approach and offers unique advantages. It is fully described in Section 3.4. Figure 3 gives a visual overview of the iterativeclassifier approach. For a detailed introduction to collective classification see Sen et al. (2008). 1507 Debate 006 Speaker 400378 [against] Mr. Speaker, . . . all over Washington and in the country, people are talking today about the majority’s last-minute decision to abandon ... . . . Speaker 400115 [for] . . . Mr. Speaker, . . . I just want to say to the gentlewoman from New York that every single member of this institution . . . . . . Figure 1: Sample speech fragments from the ConVote corpus. The phrase gentlewoman from New York by speaker 400115 is annotated as a reference to speaker 400378. Debate content Citation vectors Content-only vectors Content-only classifications Citation classifications Content-only and citation scores Overall classifications Extract features Extract features SVM SVM Normalise Normalise MF/LBP/Mincut Figure 2: Dual-classifier approach. Debate content Content-only vectors Content-only classifications Local vectors Local classifications Overall classifications Extract features SVM Combine content-only and citation features SVM Update citation features Terminate iteration Figure 3: Iterative-classifier approach. 3.1 Dual-classifier Approach with Minimum-cut Thomas et al. use linear kernel SVMs as their base classifiers. The content-only classifier is trained to predict y or n based on the unigram presence features found in speeches. The citation classifier is trained to predict “same class” or “different class” labels based on the unigram presence features found in the context windows (30 tokens before, 20 tokens after) surrounding citations for each pair of speakers in the debate. The decision plane distance computed by the content-only SVM is normalized to a positive real number and stripped of outliers: φi(y) =      1 di > 2σi;  1 + di 2σi  /2 |di| ≤2σi; 0 di < −2σi where σi is the standard deviation of the decision plane distance, di, over all of the instances in the debate and φi(n) = 1−φi(y). The citation classifier output is processed similarly:1 ψij(y, y) =    0 dij < θ; α · dij/4σij θ ≤dij ≤4σij; α dij > 4σij where σij is the standard deviation of the decision plane distance, dij over all of the citations in the debate and ψij(n, n) = ψij(y, y). The α and θ variables are free parameters. A given class assignment v is assigned a cost that is the sum of per-instance and per-pair class costs derived from the content-only and citation classifiers respectively: c(v) = X Vi∈V φi(¯vi) + X (Vi,Vj)∈E:vi̸=vj ψij(vi, vi) where vi is the label of node Vi and ¯vi denotes the complement class of vi. 1Thomas et al. classify each citation context window separately, so their ψ values are actually calculated in a slightly more complicated way. We adopted the present approach for conceptual simplicity and because it gave superior performance in preliminary experiments. 
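A minimal sketch of evaluating the cost c(v) above for a candidate assignment, using dictionary representations of φ and ψ (our own layout, not the authors' implementation), is:

def assignment_cost(assignment, phi, psi):
    # assignment: dict node -> 'y' or 'n'
    # phi:        dict node -> {'y': score, 'n': score}
    # psi:        dict (node_i, node_j) -> psi_ij(y, y), the "same class" score
    # Each node pays its content-only preference for the class it was NOT
    # assigned; each "same class" edge pays its association score whenever
    # its two endpoints receive different classes.
    other = {'y': 'n', 'n': 'y'}
    node_cost = sum(phi[v][other[assignment[v]]] for v in assignment)
    edge_cost = sum(score for (i, j), score in psi.items()
                    if assignment[i] != assignment[j])
    return node_cost + edge_cost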
1508 The cost function is modeled in a flow graph where extra source and sink nodes represent the y and n labels respectively. Each node in V is connected to the source and sink with capacities φi(y) and φi(n) respectively. Pairs classified in the “same class” class are linked with capacities defined by ψ. An exact optimum and corresponding overall classification is efficiently computed by finding the minimum-cut of the flow graph (Blum and Chawla, 2001). The free parameters are tuned on a set of held-out data. Thomas et al. demonstrate improvements over content-only classification, without attempting to show that the approach does better than any alternatives; the main appeal is the simplicity of the flow graph model. There are a number of theoretical limitations to the approach, which we now discuss. As Thomas et al. point out, the model has no way of representing the “different class” output from the citation classifier and these citations must be discarded. This, to us, is the most significant problem with the model. Inspection of the corpus shows that approximately 80% of citations indicate agreement, meaning that for the present task the impact of discarding this information may not be large. However, the primary utility in collective approaches lies in their ability to fill in gaps in information not picked up by content-only classification. All available link information should be applied to this end, so we need models capable of accepting both positive and negative information. The normalization techniques used for converting SVM outputs to graph weights are somewhat arbitrary. The use of standard deviations appears problematic as, intuitively, the strength of a classification should be independent of its variance. As a case in point, consider a set of instances in a debate all classified as similarly weak positives by the SVM. Use of ψi as defined above would lead to these being erroneously assigned the maximum score because of their low variance. The minimum-cut approach places instances in either the positive or negative class depending on which side of the cut they fall on. This means that no measure of classification confidence is available. This extra information is useful at the very least to give a human user an idea of how much to trust the classification. A measure of classification confidence may also be necessary for incorporation into a broader system, e.g., a meta-classifier (Andreevskaia and Bergler, 2008; Li and Zong, 2008). Tuning the α and θ parameters is likely to become a source of inaccuracy in cases where the tuning and test debates have dissimilar link structures. For example, if the tuning debates tend to have fewer, more accurate links the α parameter will be higher. This will not produce good results if the test debates have more frequent, less accurate links. 3.2 Heuristics for Improving Minimum-cut Bansal et al. (2008) offer preliminary work describing additions to the Thomas et al. minimum-cut approach to incorporate “different class” citation classifications. They use post hoc adjustments of graph capacities based on simple heuristics. Two of the three approaches they trial appear to offer performance improvements: The SetTo heuristic: This heuristic works through E in order and tries to force Vi and Vj into different classes for every “different class” (dij < 0) citation classifier output where i < j. It does this by altering the four relevant content-only preferences, φi(y), φi(n), φj(y), and φj(n). 
Assume without loss of generality that the largest of these values is φi(y). If this preference is respected, it follows that Vj should be put into class n. Bansal et al. instantiate this chain of reasoning by setting: • φ′ i(y) = max(β, φi(y)) • φ′ j(n) = max(β, φj(n)) where φ′ is the replacement content-only function, β is a free parameter ∈(.5, 1], φ′ i(n) = 1 −φ′ i(y), and φ′ j(y) = 1 −φ′ j(y). The IncBy heuristic: This heuristic is a more conservative version of the SetTo heuristic. Instead of replacing the content-only preferences with fixed constants, it increments and decrements the previous values so they are somewhat preserved: • φ′ i(y) = min(1, φi(y) + β) • φ′ j(n) = min(1, φj(n) + β) There are theoretical shortcomings with these approaches. The most obvious problem is the arbitrary nature of the manipulations, which produce a flow 1509 graph that has an indistinct relationship to the outputs of the two classifiers. Bensal et al. trial a range of β values, with varying impacts on performance. No attempt is made to demonstrate a method for choosing a good β value. It is not clear that the tuning approach used to set α and θ would be successful here. In any case, having a third parameter to tune would make the process more time-consuming and increase the risks of incorrect tuning, described above. As Bansal et al. point out, proceeding through E in order means that earlier changes may be undone for speakers who have multiple “different class” citations. Finally, we note that the confidence of the citation classifier is not embodied in the graph structure. The most marginal “different class” citation, classified just on the negative side of the decision plane, is treated identically to the most confident one furthest from the decision plane. 3.3 Dual-classifier Approach with Markov Random Field Approximations A pairwise Markov random field (Taskar et al., 2002) is given by the pair (G, Ψ), where G and Ψ are as previously defined, Ψ being re-termed as a set of clique potentials. Given an assignment v to the nodes V, the pairwise Markov random field is associated with the probability distribution: P(v) = 1 Z Y Vi∈V φi(vi) Y (Vi,Vj)∈E ψij(vi, vj) where: Z = X v′ Y Vi∈V φi(v′ i) Y (Vi,Vj)∈E ψij(v′ i, v′ j) and v′ i denotes the label of Vi for an alternative assignment in v′. In general, exact inference over a pairwise Markov random field is known to be NP-hard. There are certain conditions under which exact inference is tractable, but real-world data is not guaranteed to satisfy these. A class of approximate inference algorithms known as variational methods (Jordan et al., 1999) solve this problem by substituting a simpler “trial” distribution which is fitted to the Markov random field distribution. Loopy Belief Propagation: Applied to a pairwise Markov random field, loopy belief propagation is a message passing algorithm that can be concisely expressed as the following set of equations: mi→j(vj) = α X vi∈L {ψij(vi, vj)φi(vi) Y Vk∈Ni∩V\Vj mk→i(vi), ∀vj ∈L} bi(vi) = αφi(vi) Y Vj∈Ni∩V mj→i(vi), ∀vi ∈L where mi→j is a message sent by Vi to Vj and α is a normalization constant that ensures that each message and each set of marginal probabilities sum to 1. The algorithm proceeds by making each node communicate with its neighbors until the messages stabilize. The marginal probability is then derived by calculating bi(vi). 
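A compact sketch of this message-passing scheme for the binary label set used here is given below (a Python sketch; the data layout, with phi[i][l] for node potentials, psi[(i, j)][(l_i, l_j)] for edge potentials and neighbors[i] for N_i, is ours):

import math

def loopy_bp(phi, psi, neighbors, labels=('y', 'n'), max_iters=50, tol=1e-6):
    # Initialise all messages m_{i->j}(l_j) uniformly.
    msgs = {(i, j): {l: 1.0 / len(labels) for l in labels}
            for i in neighbors for j in neighbors[i]}

    def pair_potential(i, j, li, lj):
        # psi is stored once per undirected edge; look it up either way round.
        return psi[(i, j)][(li, lj)] if (i, j) in psi else psi[(j, i)][(lj, li)]

    for _ in range(max_iters):
        delta = 0.0
        for (i, j) in list(msgs):
            new = {}
            for lj in labels:
                total = 0.0
                for li in labels:
                    prod = phi[i][li]
                    for k in neighbors[i]:
                        if k != j:
                            prod *= msgs[(k, i)][li]
                    total += pair_potential(i, j, li, lj) * prod
                new[lj] = total
            z = sum(new.values()) or 1.0                   # the alpha normalisation
            new = {l: v / z for l, v in new.items()}
            delta = max(delta, max(abs(new[l] - msgs[(i, j)][l]) for l in labels))
            msgs[(i, j)] = new
        if delta < tol:                                    # messages have stabilised
            break

    beliefs = {}
    for i in neighbors:
        b = {l: phi[i][l] * math.prod(msgs[(j, i)][l] for j in neighbors[i])
             for l in labels}
        z = sum(b.values()) or 1.0
        beliefs[i] = {l: v / z for l, v in b.items()}
    return beliefs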
Mean-Field: The basic mean-field algorithm can be described with the equation: bj(vj) = αφj(vj) Y Vi∈Nj∩V Y vi∈L ψbi(vi) ij (vi, vj), vj ∈L where α is a normalization constant that ensures P vj bj(vj) = 1. The algorithm computes the fixed point equation for every node and continues to do so until the marginal probabilities bj(vj) stabilize. Mean-field can be shown to be a variational method in the same way as loopy belief propagation, using a simpler trial distribution. For details see Sen et al. (2008). Probabilistic SVM Normalisation: Unlike minimum-cut, the Markov random field approaches have inherent support for the “different class” output of the citation classifier. This allows us to apply a more principled SVM normalisation technique. Platt (1999) describes a technique for converting the output of an SVM classifier to a calibrated posterior probability. Platt finds that the posterior can be fit using a parametric form of a sigmoid: P(y = 1|d) = 1 1 + exp(Ad + B) This is equivalent to assuming that the output of the SVM is proportional to the log odds of a positive example. Experimental analysis shows error rate is 1510 improved over a plain linear SVM and probabilities are of comparable quality to those produced using a regularized likelihood kernel method. By applying this technique to the base classifiers, we can produce new, simpler Ψ functions, φi(y) = Pi and ψij(y, y) = Pij where Pi is the probabilistic normalized output of the content-only classifier and Pij is the probabilistic normalized output of the citation classifier. This approach addresses the problems with the Thomas et al. method where the use of standard deviations can produce skewed normalizations (see Section 3.1). By using probabilities we also open up the possibility of replacing the SVM classifiers with any other model than can be made to produce a probability. Note also that there are no parameters to tune. 3.4 Iterative Classifier Approach The dual-classifier approaches described above represent global attempts to solve the collective classification problem. We can choose to narrow our focus to the local level, in which we aim to produce the best classification for a single instance with the assumption that all other parts of the problem (i.e. the correct labeling of the other instances) are solved. The Iterative Classification Algorithm (Bilgic et al., 2007), defined in Algorithm 1, is a simple technique for performing collective classification using such a local classifier. After bootstrapping with a content-only classifier, it repeatedly generates new estimates for vi based on its current knowledge of Ni. The algorithm terminates when the predictions stabilize or a fixed number of iterations is completed. Each iteration is completed using a newly generated ordering O, over the instances V. We propose three feature models for the local classifier. Citation presence and Citation count: Given that the majority of citations represent the “same class” relationship (see Section 3.1), we can anticipate that content-only classification performance will be improved if we add features to represent the presence of neighbours of each class. We define the function c(i, l) = P vj∈Ni∩V δvj,l giving the number of neighbors for node Vi with label l, where δ is the Kronecker delta. 
We incorporate these citation count values, one for the supporting Algorithm 1 Iterative Classification Algorithm for each node Vi ∈V do {bootstrapping} compute ⃗ai using only local attributes of node vi ←f(⃗ai) end for repeat {iterative classification} randomly generate ordering O over nodes in V for each node Vi ∈O do {compute new estimate of vi} compute ⃗ai using current assignments to Ni vi ←f(⃗ai) end for until labels have stabilized or maximum iterations reached class and one for the opposing class, obtaining a new feature vector (u1 i , u2 i , . . . , uj i, c(i, y), c(i, n)) where u1 i , u2 i , . . . , uj i are the elements of ⃗ui, the binary unigram feature vector used by the content-only classifier to represent instance i. Alternatively, we can represent neighbor labels using binary citation presence values where any non-zero count becomes a 1 in the feature vector. Context window: We can adopt a more nuanced model for citation information if we incorporate the citation context window features into the feature vector. This is, in effect, a synthesis of the content-only and citation feature models. Context window features come from the product space L × C, where C is the set of unigrams used in citation context windows and ⃗ci denotes the context window features for instance i. The new feature vector becomes: (u1 i , u2 i , . . . , uj i, c1 i , c2 i , . . . , ck i ). This approach implements the intuition that speakers indicate their voting intentions by the words they use to refer to speakers whose vote is known. Because neighbor relations are bi-directional the reverse is also true: Speakers indicate other speakers’ voting intentions by the words they use to refer to them. As an example, consider the context window feature AGREE-FOR, indicating the presence of the agree unigram in the citation window I agree with the gentleman from Louisiana, where the label for the gentleman from Louisiana instance is y. This feature will be correctly correlated with the y label. Similarly, if the unigram were disagree the feature would be correlated with the n label. 1511 4 Experiments In this section we compare the performance of our dual-classifier and iterative-classifier approaches. We also evaluate the performance of the three feature models for local classification. All accuracies are given as the percentages of instances correctly classified. Results are macroaveraged using 10 × 10-fold cross validation, i.e. 10 runs of 10-fold cross validation using different randomly assigned data splits. Where quoted, statistical significance has been calculated using a two-tailed paired t-test measured over all 100 pairs with 10 degrees of freedom. See Bouckaert (2003) for an experimental justification for this approach. Note that the results presented in this section are not directly comparable with those reported by Thomas et al. and Bansal et al. because their experiments do not use cross-validation. See Section 4.3 for further discussion of experimental configuration. 4.1 Local Classification We evaluate three models for local classification: citation presence features, citation count features and context window features. In each case the SVM classifier is given feature vectors with both contentonly and citation information, as described in Section 3.4. Table 1 shows that context window performs the best with 89.66% accuracy, approximately 1.5% ahead of citation count and 3.5% ahead of citation presence. All three classifiers significantly improve on the content-only classifier. 
These relative scores seem reasonable. Knowing the words used in citations of each class is better than knowing the number of citations in each class, and better still than only knowing which classes of citations exist. These results represent an upper-bound for the performance of the iterative classifier, which relies on iteration to produce the reliable information about citations given here by oracle. 4.2 Collective Classification Table 2 shows overall results for the three collective classification algorithms. The iterative classifier was run separately with citation count and context winMethod Accuracy (%) Majority 52.46 Content-only 75.29 Citation presence 85.01 Citation count 88.18 Context window 89.66 Table 1: Local classifier accuracy. All three local classifiers are significant over the in-isolation classifier (p < .001). dow citation features, the two best performing local classification methods, both with a threshold of 30 iterations. Results are shown for connected instances, isolated instances, and all instances. Collective classification techniques can only have an impact on connected instances, so these figures are most important. The figures for all instances show the performance of the classifiers in our real-world task, where both connected and isolated instances need to be classified and the end-user may not distinguish between the two types. Each of the four collective classifiers outperform the minimum-cut benchmark over connected instances, with the iterative classifier (context window) (79.05%) producing the smallest gain of less than 1% and mean-field doing best with a nearly 6% gain (84.13%). All show a statistically significant improvement over the content-only classifier. Mean-field shows a statistically significant improvement over minimum-cut. The dual-classifier approaches based on loopy belief propagation and mean-field do better than the iterative-classifier approaches by an average of about 3%. Iterative classification performs slightly better with citation count features than with context window features, despite the fact that the context window model performs better in the local classifier evaluation. We speculate that this may be due to citation count performing better when given incorrect neighbor labels. This is an aspect of local classifier performance we do not otherwise measure, so a clear conclusion is not possible. Given the closeness of the results it is also possible that natural statistical variation is the cause of the difference. 1512 The performance of the minimum-cut method is not reliably enhanced by either the SetTo or IncBy heuristics. Only IncBy(.15) gives a very small improvement (0.14%) over plain minimum-cut. All of the other combinations tried diminished performance slightly. 4.3 A Note on Error Propagation and Experimental Configuration Early in our experimental work we noticed that performance often varied greatly depending on the debates that were allocated to training, tuning and testing. This observation is supported by the per-fold scores that are the basis for the macro-average performance figures reported in Table 2, which tend to have large standard deviations. The absolute standard deviations over the 100 evaluations for the minimum-cut and mean-field methods were 11.19% and 8.94% respectively. These were significantly larger than the standard deviation for the contentonly baseline, which was 7.34%. This leads us to conclude that the performance of collective classification methods is highly variable. 
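For reference, a sketch of the significance test used above, assuming the 100 per-fold accuracies of two methods are available as arrays. Following the description in Section 4 (and Bouckaert, 2003), the p-value is looked up with 10 degrees of freedom rather than n - 1; the simulated accuracies in the example are placeholders, not the reported results.

```python
import numpy as np
from scipy import stats

def paired_ttest_10x10(acc_a, acc_b, dof=10):
    """Two-tailed paired t-test over the 100 per-fold accuracies of two methods.

    acc_a, acc_b : arrays of shape (100,) from 10 runs of 10-fold CV with the
                   same fold assignments for both methods. The p-value uses
                   only `dof` degrees of freedom, as in the setup described in
                   Section 4, to account for overlapping training sets.
    """
    diffs = np.asarray(acc_a) - np.asarray(acc_b)
    t = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(len(diffs)))
    p = 2 * stats.t.sf(abs(t), dof)        # two-tailed
    return t, p

# toy example with simulated per-fold accuracies (illustrative only)
rng = np.random.default_rng(0)
mean_field = 0.84 + 0.09 * rng.standard_normal(100)
min_cut = 0.78 + 0.11 * rng.standard_normal(100)
print(paired_ttest_10x10(mean_field, min_cut))
```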
Bilgic and Getoor (2008) offer a possible explanation for this. They note that the cost of incorrectly classifying a given instance can be magnified in collective classification, because errors are propagated throughout the network. The extent to which this happens may depend on the random interaction between base classification accuracy and network structure. There is scope for further work to more fully explain this phenomenon. From these statistical and theoretical factors we infer that more reliable conclusions can be drawn from collective classification experiments that use cross-validation instead of a single, fixed data split. 5 Related work Somasundaran et al. (2009) use ICA to improve sentiment polarity classification of dialogue acts in a corpus of multi-party meeting transcripts. Link features are derived from annotations giving frame relations and target relations. Respectively, these relate dialogue acts based on the sentiment expressed and the object towards which the sentiment is expressed. Somasundaran et al. provides another argument for the usefulness of collective classification (specifically ICA), in this case as applied at a dialogue act level and relying on a complex system of annotations for link information. Somasundaran and Wiebe (2009) propose an unsupervised method for classifying the stance of each contribution to an online debate concerning the merits of competing products. Concessions to other stances are modeled, but there are no overt citations in the data that could be used to induce the network structure required for collective classification. Pang and Lee (2005) use metric labeling to perform multi-class collective classification of movie reviews. Metric labeling is a multi-class equivalent of the minimum-cut technique in which optimization is done over a cost function incorporating content-only and citation scores. Links are constructed between test instances and a set of k nearest neighbors drawn only from the training set. Restricting the links in this way means the optimization problem is simple. A similarity metric is used to find nearest neighbors. The Pang and Lee method is an instance of implicit link construction, an approach which is beyond the scope of this paper but nevertheless an important area for future research. A similar technique is used in a variation on the Thomas et al. experiment where additional links between speeches are inferred via a similarity metric (Burfoot, 2008). In cases where both citation and similarity links are present, the overall link score is taken as the sum of the two scores. This seems counter-intuitive, given that the two links are unlikely to be independent. In the framework of this research, the approach would be to train a link meta-classifier to take scores from both link classifiers and output an overall link probability. Within NLP, the use of LBP has not been restricted to document classification. Examples of other applications are dependency parsing (Smith and Eisner, 2008) and alignment (Cromires and Kurohashi, 2009). Conditional random fields (CRFs) are an approach based on Markov random fields that have been popular for segmenting and labeling sequence data (Lafferty et al., 2001). We rejected linear-chain CRFs as a candidate approach for our evaluation on the grounds that the arbitrarily connected graphs used in collective classification can not be fully represented in graphical format, i.e. 
1513 Connected Isolated All Majority 52.46 46.29 50.51 Content only 75.31 78.90 76.28 Minimum-cut 78.31 78.90 78.40 Minimum-cut (SetTo(.6)) 78.22 78.90 78.32 Minimum-cut (SetTo(.8)) 78.01 78.90 78.14 Minimum-cut (SetTo(1)) 77.71 78.90 77.93 Minimum-cut (IncBy(.05)) 78.14 78.90 78.25 Minimum-cut (IncBy(.15)) 78.45 78.90 78.46 Minimum-cut (IncBy(.25)) 78.02 78.90 78.15 Iterative-classifier (citation count) 80.07⋆ 78.90 79.69⋆ Iterative-classifier (context window) 79.05 78.90 78.93 Loopy Belief Propagation 83.37† 78.90 81.93† Mean-Field 84.12† 78.90 82.45† Table 2: Speaker classification accuracies (%) over connected, isolated and all instances. The marked results are statistically significant over the content only benchmark (⋆p < .01, † p < .001). The mean-field results are statistically significant over minimum-cut (p < .05). linear-chain CRFs do not scale to the complexity of graphs used in this research. 6 Conclusions and future work By applying alternative models, we have demonstrated the best recorded performance for collective classification of ConVote using bag-of-words features, beating the previous benchmark by nearly 6%. Moreover, each of the three alternative approaches trialed are theoretically superior to the minimum-cut approach approach for three main reasons: (1) they support multi-class classification; (2) they support negative and positive citations; (3) they require no parameter tuning. The superior performance of the dual-classifier approach with loopy belief propagation and meanfield suggests that either algorithm could be considered as a first choice for collective document classification. Their advantage is increased by their ability to output classification confidences as probabilities, while minimum-cut and the local formulations only give absolute class assignments. We do not dismiss the iterative-classifier approach entirely. The most compelling point in its favor is its ability to unify content only and citation features in a single classifier. Conceptually speaking, such an approach should allow the two types of features to inter-relate in more nuanced ways. A case in point comes from our use of a fixed size context window to build a citation classifier. Future approaches may be able to do away with this arbitrary separation of features by training a local classifier to consider all words in terms of their impact on content-only classification and their relations to neighbors. Probabilistic SVM normalization offers a convenient, principled way of incorporating the outputs of an SVM classifier into a collective classifier. An opportunity for future work is to consider normalization approaches for other classifiers. For example, confidence-weighted linear classifiers (Dredze et al., 2008) have been shown to give superior performance to SVMs on a range of tasks and may therefore be a better choice for collective document classification. Of the three models trialled for local classifiers, context window features did best when measured in an oracle experiment, but citation count features did better when used in a collective classifier. We conclude that context window features are a more nuanced and powerful approach that is also more likely to suffer from data sparseness. Citation count features would have been the less effective in a scenario where the fact of the citation existing was less informative, for example, if a citation was 50% likely to indicate agreement rather than 80% likely. There is much scope for further research in this area. 
1514 References Alina Andreevskaia and Sabine Bergler. 2008. When specialists and generalists work together: Overcoming domain dependence in sentiment tagging. In ACL, pages 290–298. Mohit Bansal, Claire Cardie, and Lillian Lee. 2008. The power of negative thinking: Exploiting label disagreement in the min-cut classification framework. In COLING, pages 15–18. Mustafa Bilgic and Lise Getoor. 2008. Effective label acquisition for collective classification. In KDD, pages 43–51. Mustafa Bilgic, Galileo Namata, and Lise Getoor. 2007. Combining collective classification and link prediction. In ICDM Workshops, pages 381–386. IEEE Computer Society. Avrim Blum and Shuchi Chawla. 2001. Learning from labeled and unlabeled data using graph mincuts. In ICML, pages 19–26. Remco R. Bouckaert. 2003. Choosing between two learning algorithms based on calibrated tests. In ICML, pages 51–58. Clint Burfoot and Timothy Baldwin. 2009. Automatic satire detection: Are you having a laugh? In ACLIJCNLP Short Papers, pages 161–164. Clint Burfoot. 2008. Using multiple sources of agreement information for sentiment classification of political transcripts. In Australasian Language Technology Association Workshop 2008, pages 11–18. ALTA. Minh Duc Cao and Xiaoying Gao. 2005. Combining contents and citations for scientific document classification. In 18th Australian Joint Conference on Artificial Intelligence, pages 143–152. Fabien Cromires and Sadao Kurohashi. 2009. An alignment algorithm using belief propagation and a structure-based distortion model. In EACL, pages 166–174. Mark Dredze, Koby Crammer, and Fernando Pereira. 2008. Confidence-weighted linear classification. In ICML, pages 264–271. Zeno Gantner and Lars Schmidt-Thieme. 2009. Automatic content-based categorization of Wikipedia articles. In 2009 Workshop on The People’s Web Meets NLP: Collaboratively Constructed Semantic Resources, pages 32–37. Michael Jordan, Zoubin Ghahramani, Tommi Jaakkola, Lawrence Saul, and David Heckerman. 1999. An introduction to variational methods for graphical models. Machine Learning, 37:183–233. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289. Shoushan Li and Chengqing Zong. 2008. Multi-domain sentiment classification. In ACL, pages 257–260. Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, pages 115–124. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In EMNLP, pages 79–86. John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In A. Smola, P. Bartlett, B. Scholkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers, pages 61–74. MIT Press. Prithviraj Sen, Galileo Mark Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad. 2008. Collective classification in network data. AI Magazine, 29:93–106. David A. Smith and Jason Eisner. 2008. Dependency parsing by belief propagation. In EMNLP, pages 145– 156. Swapna Somasundaran and Janyce Wiebe. 2009. Recognizing stances in online debates. In ACL-IJCNLP, pages 226–234. Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification. In EMNLP, pages 170–179. 
Ben Taskar, Pieter Abbeel, and Daphne Koller. 2002. Discriminative probabilistic models for relational data. In UAI. Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In EMNLP, pages 327–335. Yiming Yang and Xin Liu. 1999. A re-examination of text categorization methods. In Proceedings ACM SIGIR, pages 42–49. 1515
2011
151
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1516–1525, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Integrating history-length interpolation and classes in language modeling Hinrich Sch¨utze Institute for NLP University of Stuttgart Germany Abstract Building on earlier work that integrates different factors in language modeling, we view (i) backing off to a shorter history and (ii) class-based generalization as two complementary mechanisms of using a larger equivalence class for prediction when the default equivalence class is too small for reliable estimation. This view entails that the classes in a language model should be learned from rare events only and should be preferably applied to rare events. We construct such a model and show that both training on rare events and preferable application to rare events improve perplexity when compared to a simple direct interpolation of class-based with standard language models. 1 Introduction Language models, probability distributions over strings of words, are fundamental to many applications in natural language processing. The main challenge in language modeling is to estimate string probabilities accurately given that even very large training corpora cannot overcome the inherent sparseness of word sequence data. One way to improve the accuracy of estimation is class-based generalization. The idea is that even though a particular word sequence s may not have occurred in the training set (or too infrequently for accurate estimation), the occurrence of sequences similar to s can help us better estimate p(s). Plausible though this line of reasoning is, the language models most commonly used today do not incorporate class-based generalization. This is partially due to the additional cost of creating classes and using classes as part of the model. But an equally important reason is that most models that integrate class-based information do so by way of a simple interpolation and achieve only a modest improvement in performance. In this paper, we propose a new type of classbased language model. The key novelty is that we recognize that certain probability estimates are hard to improve based on classes. In particular, the best probability estimate for frequent events is often the maximum likelihood estimator and this estimator is hard to improve by using other information sources like classes or word similarity. We therefore design a model that attempts to focus the effect of class-based generalization on rare events. Specifically, we propose to employ the same strategy for this that history-length interpolated (HI) models use. We define HI models as models that interpolate the predictions of different-length histories, e.g., p(w3|w1w2) = λ1(w1w2)p′(w3|w1w2) + λ2(w1w2)p′(w3|w2) + (1 −λ1(w1w2) −λ2(w1w2))p′(w3) where p′ is a simple estimate; in this section, we use p′ = pML, the maximum likelihood estimate, as an example. Jelinek-Mercer (Jelinek and Mercer, 1980) and modified Kneser-Ney (Kneser and Ney, 1995) models are examples of HI models. HI models address the challenge that frequent events are best estimated by a method close to maximum likelihood by selecting appropriate values for the interpolation weights. For example, if w1w2w3 is frequent, then λ1 will be close to 1, thus ensuring that p(w3|w1w2) ≈pML(w3|w1w2) and that the components pML(w3|w2) and pML(w3), which are unhelpful in this case, will only slightly change the reliable estimate pML(w3|w1w2). 
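For concreteness, here is a minimal sketch of this kind of history-length interpolation with p′ = pML. The constant interpolation weights are a simplification: in Jelinek-Mercer style models they depend on the history and are tuned on held-out data.

```python
from collections import Counter

class InterpolatedTrigramLM:
    """History-interpolated trigram model p(w3 | w1 w2) with p' = p_ML.

    lambda1 and lambda2 are constants here for simplicity; in practice they
    would be history-dependent and estimated on held-out data.
    """
    def __init__(self, sentences, lambda1=0.6, lambda2=0.3):
        self.l1, self.l2 = lambda1, lambda2
        self.uni, self.bi, self.tri = Counter(), Counter(), Counter()
        for s in sentences:
            toks = ["<s>", "<s>"] + s + ["</s>"]
            for i, w in enumerate(toks):
                self.uni[w] += 1
                if i >= 1:
                    self.bi[(toks[i - 1], w)] += 1
                if i >= 2:
                    self.tri[(toks[i - 2], toks[i - 1], w)] += 1
        self.total = sum(self.uni.values())

    def _ml(self, num, den):
        return num / den if den else 0.0

    def prob(self, w1, w2, w3):
        p_tri = self._ml(self.tri[(w1, w2, w3)], self.bi[(w1, w2)])
        p_bi = self._ml(self.bi[(w2, w3)], self.uni[w2])
        p_uni = self.uni[w3] / self.total
        return self.l1 * p_tri + self.l2 * p_bi + (1 - self.l1 - self.l2) * p_uni

lm = InterpolatedTrigramLM([["the", "cat", "sat"], ["the", "cat", "ran"]])
print(lm.prob("the", "cat", "sat"))
```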
1516 The main contribution of this paper is to propose the same mechanism for class language models. In fact, we will use the interpolation weights of a KN model to determine how much weight to give to each component of the interpolation. The difference to a KN model is merely that the lower-order distribution is not the lower-order KN distribution (as in KN), but instead an interpolation of the lower-order KN distribution and a class-based distribution. We will show that this method of integrating history interpolation and classes significantly increases the performance of a language model. Focusing the effect of classes on rare events has another important consequence: if this is the right way of using classes, then they should not be formed based on all events in the training set, but only based on rare events. We show that doing this increases performance. Finally, we introduce a second discounting method into the model that differs from KN. This can be motivated by the fact that with two sources of generalization (history-length and classes) more probability mass should be allocated to these two sources than to the single source used in KN. We propose a polynomial discount and show a significant improvement compared to using KN discounting only. This paper is structured as follows. Section 2 discusses related work. Section 3 reviews the KN model and introduces two models, the DupontRosenfeld model (a “recursive” model) and a toplevel interpolated model, that integrate the KN model (a history interpolation model) with a class model. Section 4 details our experimental setup. Results are presented in Section 5. Based on an analysis of strengths and weaknesses of DupontRosenfeld and top-level interpolated models, we present a new polynomial discounting mechanism that does better than either in Section 6. Section 7 presents our conclusions. 2 Related work A large number of different class-based models have been proposed in the literature. The well-known model by Brown et al. (1992) is a class sequence model, in which p(u|w) is computed as the product of a class transition probability and an emission probability, p(g(u)|g(w))p(u|g(u)), where g(u) is the class of u. Other approaches condition the probability of a class on n-grams of lexical items (as opposed to classes) (Whittaker and Woodland, 2001; Emami and Jelinek, 2005; Uszkoreit and Brants, 2008). In this work, we use the Brown type of model: it is simpler and has fewer parameters. Models that condition classes on lexical n-grams could be extended in a way similar to what we propose here. Classes have been used with good results in a number of applications, e.g., in speech recognition (Yokoyama et al., 2003), sentiment analysis (Wiegand and Klakow, 2008), and question answering (Momtazi and Klakow, 2009). Classes have also been shown to improve the performance of exponential models (Chen, 2009). Our use of classes of lexical n-grams for n > 1 has several precedents in the literature (Suhm and Waibel, 1994; Kuo and Reichl, 1999; Deligne and Sagisaka, 2000; Justo and Torres, 2009). The novelty of our approach is that we integrate phrase-level classes into a KN model. Hierarchical clustering (McMahon and Smith, 1996; Zitouni and Zhou, 2007; Zitouni and Zhou, 2008) has the advantage that the size of the class to be used in a specific context is not fixed, but can be chosen at an optimal level of the hierarchy. 
There is no reason why our non-hierarchical flat model could not be replaced with a hierarchical model and we would expect this to improve results. The key novelty of our clustering method is that clusters are formed based on rare events in the training corpus. This type of clustering has been applied to other problems before, in particular to unsupervised part-of-speech tagging (Sch¨utze, 1995; Clark, 2003; Reichart et al., 2010). However, the importance of rare events for clustering in language modeling has not been investigated before. Our work is most similar to the lattice-based language models proposed by Dupont and Rosenfeld (1997). Bilmes and Kirchhoff (2003) generalize lattice-based language models further by allowing arbitrary factors in addition to words and classes. We use a special case of lattice-based language models in this paper. Our contributions are that we introduce the novel idea of rare-event clustering into language modeling and that we show that the modified model performs better than a strong word-trigram 1517 symbol denotation P[[w]] P w (sum over all unigrams w) c(wi j) count of wi j n1+(•wi j) # of distinct w occurring before wi j Table 1: Notation used for Kneser-Ney. baseline. 3 Models In this section, we introduce the three models that we compare in our experiments: Kneser-Ney model, Dupont-Rosenfeld model, and top-level interpolation model. 3.1 Kneser-Ney model Our baseline model is the modified Kneser-Ney (KN) trigram model as proposed by Chen and Goodman (1999). We give a comprehensive description of our implementation of KN because the details are important for the integration of the class model given below. We use the notation in Table 1. We estimate pKN on the training set as follows. pKN(w3|w2 1) = c(w3 1) −d′′′(c(w3 1)) P[[w]] c(w2 1w) +γ3(w2 1)pKN(w3|w2) γ3(w2 1) = P[[w]] d′′′(c(w2 1w)) P[[w]] c(w2 1w) pKN(w3|w2) = n1+(•w3 2) −d′′(n1+(•w3 2)) P[[w]] n1+(•w2w) +γ2(w2)pKN(w3) γ2(w2) = P[[w]] d′′(n1+(•w2w)) P[[w]] n1+(•w2w) pKN(w3) = ( n1+(•w3)−d′(n1+(•w3)) P [[w]] n1+(•w) if c(w3) > 0 γ1 if c(w3) = 0 γ1 = P[[w]] d′(n1+(•w)) P[[w]] n1+(•w) The parameters d′, d′′, and d′′′ are the discounts for unigrams, bigrams and trigrams, respectively, as defined by Chen and Goodman (1996, p. 20, (26)). Note that our notation deviates from C&G in that they use the single symbol D1 for the three different values d′(1), d′′(1), and d′′′(1) etc. 3.2 Dupont-Rosenfeld model History-interpolated models attempt to find a good tradeoff between using a maximally informative history for accurate prediction of frequent events and generalization for rare events by using lower-order distributions; they employ this mechanism recursively by progressively shortening the history. The key idea of the improved model we will adopt is that class generalization ought to play the same role in history-interpolated models as the lowerorder distributions: they should improve estimates for unseen and rare events. Following Dupont and Rosenfeld (1997), we implement this idea by linearly interpolating the class-based distribution with the lower order distribution, recursively at each level. For a trigram model, this means that we interpolate pKN(w3|w2) and pB(w3|w1w2) on the first backoff level and pKN(w3) and pB(w3|w2) on the second backoff level, where pB is the (Brown) class model (see Section 4 for details on pB). 
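Before giving the formal definition of pDR below, the following sketch makes the structure of the KN estimator from Section 3.1 concrete. It is deliberately simplified: a bigram model with a single absolute discount D rather than the trigram model with the count-dependent discounts d′, d′′, d′′′ defined above, so it illustrates the discount/backoff pattern rather than reproducing the exact estimator.

```python
from collections import Counter, defaultdict

class SimpleKneserNeyBigram:
    """Interpolated Kneser-Ney bigram model with one absolute discount D.

    Simplified sketch of the estimator in Section 3.1: one discount instead of
    the count-dependent d', d'', d''', and two history lengths instead of three.
    """
    def __init__(self, sentences, D=0.75):
        self.D = D
        self.bigram = Counter()
        self.hist_total = Counter()            # c(v .)
        self.hist_types = defaultdict(set)     # distinct continuations of v
        self.left_types = defaultdict(set)     # distinct left contexts of w
        for s in sentences:
            toks = ["<s>"] + s + ["</s>"]
            for v, w in zip(toks, toks[1:]):
                self.bigram[(v, w)] += 1
                self.hist_total[v] += 1
                self.hist_types[v].add(w)
                self.left_types[w].add(v)
        self.n_bigram_types = len(self.bigram)

    def p_continuation(self, w):
        # "unique-event" unigram distribution: n1+(. w) / n1+(. .)
        return len(self.left_types[w]) / self.n_bigram_types

    def prob(self, v, w):
        c_vw, c_v = self.bigram[(v, w)], self.hist_total[v]
        if c_v == 0:                            # unseen history: back off entirely
            return self.p_continuation(w)
        discounted = max(c_vw - self.D, 0.0) / c_v
        gamma = self.D * len(self.hist_types[v]) / c_v
        return discounted + gamma * self.p_continuation(w)

lm = SimpleKneserNeyBigram([["cents", "a", "share"], ["cents", "a", "premium"]])
print(lm.prob("a", "share"), lm.prob("a", "dividend"))
```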
We call this model pDR for Dupont-Rosenfeld model and define it as follows:

p_{DR}(w_3 | w_1^2) = \frac{c(w_1^3) - d'''(c(w_1^3))}{\sum_w c(w_1^2 w)} + \gamma_3(w_1^2) \, [\, \beta_1(w_1^2) \, p_B(w_3 | w_1^2) + (1 - \beta_1(w_1^2)) \, p_{DR}(w_3 | w_2) \,]

p_{DR}(w_3 | w_2) = \frac{n_{1+}(\bullet w_2^3) - d''(n_{1+}(\bullet w_2^3))}{\sum_w n_{1+}(\bullet w_2 w)} + \gamma_2(w_2) \, [\, \beta_2(w_2) \, p_B(w_3 | w_2) + (1 - \beta_2(w_2)) \, p_{DR}(w_3) \,]

where \beta_i(v) is equal to a parameter \alpha_i if the history (w_1^2 or w_2) is part of a cluster and 0 otherwise:

\beta_i(v) = \alpha_i if v \in B_{2-(i-1)}, and 0 otherwise.

B_1 (resp. B_2) is the set of unigram (resp. bigram) histories that is covered by the clusters. We cluster bigram histories and unigram histories separately and write p_B(w_3 | w_1 w_2) for the bigram cluster model and p_B(w_3 | w_2) for the unigram cluster model. Clustering and the estimation of these two distributions are described in Section 4.
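To make the role of the interpolation weights concrete, here is a minimal sketch of one level of this recursion. It assumes the discounted ML term, the leftover mass γ3, the class estimate pB, and the lower-order pDR value have already been computed; all argument names and the toy numbers are illustrative only. The bigram level is analogous.

```python
def p_dr_trigram(disc_ml3, gamma3, p_b_bigram_hist, p_dr_lower, alpha1):
    """One level of the Dupont-Rosenfeld recursion for p_DR(w3 | w1 w2).

    disc_ml3        : precomputed (c(w1 w2 w3) - d''') / c(w1 w2 .), 0 if unseen
    gamma3          : leftover probability mass gamma_3(w1 w2)
    p_b_bigram_hist : class-model estimate p_B(w3 | w1 w2), or None if the
                      history w1 w2 is not covered by a cluster
    p_dr_lower      : recursively computed lower-order estimate p_DR(w3 | w2)
    alpha1          : weight alpha_1 given to the class model when available
    """
    beta1 = alpha1 if p_b_bigram_hist is not None else 0.0
    class_term = p_b_bigram_hist if p_b_bigram_hist is not None else 0.0
    return disc_ml3 + gamma3 * (beta1 * class_term + (1.0 - beta1) * p_dr_lower)

# toy numbers: for a frequent trigram the discounted ML term dominates, so the
# class model only slightly adjusts the estimate
print(p_dr_trigram(disc_ml3=0.92, gamma3=0.03,
                   p_b_bigram_hist=0.4, p_dr_lower=0.2, alpha1=0.3))
```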
The input to the clustering is the vocabulary Bi and the cluster training corpus. For a particular base set size b, the unigram input vocabulary B1 is set to the b most frequent unigrams in the training set and the bigram input vocabulary B2 is set to the b most frequent bigrams in the training set. In this section, we call the WSJ training corpus the raw corpus and the cluster training corpus the cluster corpus to be able to distinguish them. We run four different clusterings for each base set size (except for the large sets, see below). The cluster corpora are constructed as follows. • All-event unigram clustering. The cluster corpus is simply the raw corpus. • All-event bigram clustering. The cluster corpus is constructed as follows. A sentence of the raw corpus that contains s words is included twice, once as a sequence of the ⌊s/2⌋bigrams “w1−w2 w3−w4 w5−w6 . . . ” and once as a sequence of the ⌊(s −1)/2⌋bigrams “w2−w3 w4−w5 w6−w7 . . . ”. • Unique-event unigram clustering. The cluster corpus is the set of all sequences of two unigrams ∈B1 that occur in the raw corpus, one sequence per line. Each sequence occurs only once in this cluster corpus. • Unique-event bigram clustering. The cluster corpus is the set of all sequences of two bigrams ∈B2 that occur in the training corpus, 1519 one sequence per line. Each sequence occurs only once in this cluster corpus. As mentioned above, we need both unigram and bigram clusters because we want to incorporate class-based generalization for histories of lengths 1 and 2. As we will show below this significantly increases performance. Since the focus of this paper is not on clustering algorithms, reformatting the training corpus as described above (as a sequence of hyphenated bigrams) is a simple way of using SRILM for bigram clustering. The unique-event clusterings are motivated by the fact that in the Dupont-Rosenfeld model, frequent events are handled by discounted ML estimates. Classes are only needed in cases where an event was not seen or was not frequent enough in the training set. Consequently, we should form clusters not based on all events in the training corpus, but only on events that are rare – because this is the type of event that classes will then be applied to in prediction. The two unique-event corpora can be thought of as reweighted collections in which each unique event receives the same weight. In practice this means that clustering is mostly influenced by rare events since, on the level of types, most events are rare. As we will see below, rare-event clusterings perform better than all-event clusterings. This is not surprising as the class-based component of the model can only benefit rare events and it is therefore reasonable to estimate this component based on a corpus dominated by rare events. We started experimenting with reweighted corpora because class sizes become very lopsided in regular SRILM clustering as the size of the base set increases. The reason is that the objective function maximizes mutual information. Highly differentiated classes for frequent words contribute substantially to this objective function whereas putting all rare words in a few large clusters does not hurt the objective much. However, our focus is on using clustering for improving prediction for rare events; this means that the objective function is counterproductive when contexts are frequency-weighted as they occur in the corpus. 
After overweighting rare contexts, the objective function is more in sync with what we use clusters for in our model. pML maximum likelihood pB Brown cluster model pE cluster emission probability pT cluster transition probability pKN KN model pDR Dupont-Rosenfeld model pTOP top-level interpolation pPOLKN KN and polynomial discounting pPOL0 polynomial discounting only Table 2: Key to probability distributions It is important to note that the same intuition underlies unique-event clustering that also motivates using the “unique-event” distributions n1+(•w3 2)/(P n1+(•w2w)) and n1+(•w3)/(P n1+(•w)) for the backoff distributions in KN. Viewed this way, the basic KN model also uses a unique-event corpus (although a different one) for estimating backoff probabilities. In all cases, we set the number of clusters to k = 512. Our main goal in this paper is to compare different ways of setting up history-length/class interpolated models and we do not attempt to optimize k. We settled on a fixed number of k = 512 because Brown et al. (1992) used a total of 1000 classes. 512 unigram classes and 512 bigram classes roughly correspond to this number. We prefer powers of 2 to facilitate efficient storage of cluster ids (one such cluster id must be stored for each unigram and each bigram) and therefore choose k = 512. Clustering was performed on an Opteron 8214 processor and took from several minutes for the smallest base sets to more than a week for the largest set of 400,000 items. To estimate n-gram emission probabilities pE, we first introduce an additional cluster for all unigrams that are not in the base set; emission probabilities are then estimated by maximum likelihood. Cluster transition probabilities pT are computed using addone smoothing. Both pE and pT are estimated on the raw corpus. The two class distributions are then defined as follows: pB(w3|w1w2) = pT(g(w3)|g(w1w2))pE(w3|g(w3)) pB(w3|w2) = pT(g(w3)|g(w2))pE(w3|g(w3)) where g(v) is the class of the uni- or bigram v. 1520 pDR all events unique events |Bi| α1 α2 perp. α1 α2 perp. 1a 1×104 .20 .40 87.42 .2 .4 87.41 2a 2×104 .20 .50 86.97 .2 .5 86.88 3a 3×104 .10 .40 87.14 .2 .5 86.57 4a 4×104 .10 .40 87.22 .3 .5 86.31 5a 5×104 .05 .30 87.54 .3 .6 86.10 6a 6×104 .01 .30 87.71 .3 .6 85.96 pTOP all events unique events |Bi| λ1 λ2 perp. λ1 λ2 perp. 1b 1×104 .020 .03 87.65 .02 .02 87.71 2b 2×104 .030 .04 87.43 .03 .03 87.47 3b 3×104 .020 .03 87.52 .03 .03 87.34 4b 4×104 .010 .04 87.58 .03 .04 87.24 5b 5×104 .003 .03 87.74 .03 .04 87.15 6b 6×104 .000 .02 87.82 .03 .04 87.09 Perplexity of KN model: 88.03 Table 3: Optimal parameters for Dupont-Rosenfeld (left) and top-level (right) models on the validation set and perplexity on the validation set. The two tables compare performance when using a class model trained on all events vs a class model trained on unique events. |B1| = |B2| is the number of unigrams and bigrams in the clusters; e.g., lines 1a and 1b are for models that cluster 10,000 unigrams and 10,000 bigrams. Table 2 is a key to the probability distributions we use. 5 Results Table 3 shows the performance of pDR and pTOP for a range of base set sizes |Bi| and for classes trained on all events and on unique events. Parameters αi and λi are optimized on the validation set. Perplexity is reported for the validation set. All following tables also optimize on the validation set and report results on the validation set. The last table, Table 7, also reports perplexity for the test set. 
Table 3 confirms previous findings that classes improve language model performance. All models have a perplexity that is lower than KN (88.03). When comparing all-event and unique-event clusterings, a clear tendency is apparent. In all-event clustering, the best performance is reached for |Bi| = 20000: perplexity is 86.97 with this base set size for pDR (line 2a) and 87.43 for pTOP (line 2b). In unique-event clustering, performance keeps improving with larger and larger base sets; the best perplexities are obtained for |Bi| = 60000: 85.96 for pDR and 87.09 for pTOP (lines 6a, 6b). The parameter values also reflect this difference between all-event and unique-event clustering. For unique-event results of pDR, we have α1 ≥.2 and α2 ≥.4 (1a–6a). This indicates that classes and history interpolation are both valuable when the model is backing off. But for all-event clustering, the values of αi decrease: from a peak of .20 and .50 (2a) to .01 and .30 (6a), indicating that with larger base sets, less and less value can be derived from classes. This again is evidence that rare-event clustering is the correct approach: only clusters derived in rareevent clustering receive high weights αi in the interpolation. This effect can also be observed for pTOP: the value of λ1 (the weight of bigrams) is higher for unique-event clustering than for all-event clustering (with the exception of lines 1b&2b). The quality of bigram clusters seems to be low in all-event clustering when the base set becomes too large. Perplexity is generally lower for unique-event clustering than for all-event clustering: this is the case for all values of |Bi| for pDR (1a–6a); and for |Bi| > 20000 for pTOP (3b–6b). Table 4 compares the two models in two different conditions: (i) b-: using unigram clusters only and (ii) b+: using unigram clusters and bigram clusters. For all events, there is no difference in performance. However, for unique events, the model that includes bigrams (b+) does better than the model without bigrams (b-). The effect is larger for pDR than for pTOP because (for unique events) a larger weight for the unigram model (λ2 = .05 instead of λ2 = .04) apparently partially compensates for the missing bigram clusters. Table 3 shows that rare-event models do better than all-event models. Given that training large class models with SRILM on all events would take several weeks or even months, we restrict our direct 1521 pDR pTOP all unique all unique α1 α2 perp. α1 α2 perp. λ1 λ2 perp. λ1 λ2 perp. b.3 87.71 .5 86.62 .02 87.82 .05 87.26 b+ .01 .3 87.71 .3 .6 85.96 0 .02 87.82 .03 .04 87.09 Table 4: Using both unigram and bigram clusters is better than using unigrams only. Results for |Bi| = 60,000. pDR pTOP |Bi| α1 α2 perp. λ1 λ2 perp. 1 6×104 0.3 0.6 85.96 0.03 0.04 87.09 2 1×105 0.3 0.6 85.59 0.04 0.04 86.93 3 2×105 0.3 0.6 85.20 0.05 0.04 86.77 4 4×105 0.3 0.7 85.14 0.05 0.04 86.74 Table 5: Dupont-Rosenfeld and top-level models for |Bi| ∈{60000, 100000, 200000, 400000}. Clustering trained on unique-event corpora. comparison of all-event and rare-event models to |Bi| ≤60, 000 in Tables 3-4 and report only rareevent numbers for |Bi| > 60, 000 in what follows. As we can see in Table 5, the trends observed in Table 3 continue as |Bi| is increased further. For both models, perplexity steadily decreases as |Bi| is increased from 60,000 to 400,000. (Note that for |Bi| = 400000, the actual size of B1 is 256,873 since there are only that many words in the training corpus.) 
The improvements in perplexity become smaller for larger base set sizes, but it is reassuring to see that the general trend continues for large base set sizes. Our explanation is that the class component is focused on rare events and the items that are being added to the clustering for large base sets are all rare events. The perplexity for pDR is clearly lower than that of pTOP, indicating the superiority of the DupontRosenfeld model.1 1Dupont and Rosenfeld (1997) found a relatively large improvement of the “global” linear interpolation model – ptop in our terminology – compared to the baseline whereas ptop performs less well in our experiments. One possible explanation is that our KN baseline is stronger than the word trigram baseline they used. 6 Polynomial discounting Further comparative analysis of pDR and pTOP revealed that pDR is not uniformly better than pTOP. We found that pTOP does poorly on frequent events. For example, for the history w1w2 = cents a, the continuation w3 = share dominates. pDR deals well with this situation because pDR(w3|w1w2) is the discounted ML estimate, with a discount that is small relative to the 10,768 occurrences of cents a share in the training set. In the pTOP model on the last line in Table 5, the discounted ML estimate is multiplied by 1 −.05 −.04 = .91, which results in a much less accurate estimate of pTOP(share|cents a). In contrast, pTOP does well for productive histories, for which it is likely that a continuation unseen in the training set will occur. An example is the history in the – almost any adjective or noun can follow. There are 6251 different words that (i) occur after in the in the validation set, (ii) did not occur after in the in the training set, and (iii) occurred at least 10 times in the training set. Because their training set unigram frequency is at least 10, they have a good chance of being assigned to a class that captures their distributional behavior well and pB(w3|w1w2) is then likely to be a good estimate. For a history with these properties, it is advantageous to further discount the discounted ML estimates by multiplying them with .91. pTOP then gives the remaining probability mass of .09 to words w3 whose probability would otherwise be underestimated. What we have just described is already partially addressed by the KN model – γ(v) will be relatively large for a productive history like v = in the. However, it looks like the KN discounts are not large enough for productive histories, at least not in a combined history-length/class model. Apparently, when incorporating the strengths of a classbased model into KN, the default discounting mechanism does not reallocate enough probability mass 1522 from high-frequency to low-frequency events. We conclude from this analysis that we need to increase the discount values d for large counts. We could add a constant to d, but one of the basic premises of the KN model, derived from the assumption that n-gram marginals should be equal to relative frequencies, is that the discount is larger for more frequent n-grams although in many implementations of KN only the cases c(w3 1) = 1, c(w3 1) = 2, and c(w3 1) ≥3 are distinguished. This suggests that the ideal discount d(x) in an integrated history-length/class language model should grow monotonically with c(v). The simplest way of implementing this heuristically is a polynomial of form ρxr where ρ and r are parameters. 
r controls the rate of growth of the discount as a function of x; ρ is a factor that can be scaled for optimal performance. The incorporation of the additional polynomial discount into KN is straightforward. We use a discount function e(x) that is the sum of d(x) and the polynomial: e(x) = d(x) + ( ρxr for x ≥4 0 otherwise where (e, d) ∈{(e′, d′), (e′′, d′′), (e′′′, d′′′)}. This model is identical to pDR except that d is replaced with e. We call this model pPOLKN. pPOLKN directly implements the insight that, when using class-based generalization, discounts for counts x ≥4 should be larger than they are in KN. We also experiment with a second version of the model: e(x) = ρxr This second model, called pPOL0, is simpler and does not use KN discounts. It allows us to determine whether a polynomial discount by itself (without using KN discounts in addition) is sufficient. Results for the two models are shown in Table 6 and compared with the two best models from Table 5, for |Bi| = 400,000, classes trained on unique events. pPOLKN and pPOL0 achieve a small improvement in perplexity when compared to pDR (line 3&4 vs 2). This shows that using discounts that are larger than KN discounts for large counts is potentially advantageous. α1/λ1 α2/λ2 ρ r perp. 1 pTOP .05 .04 86.74 2 pDR .30 .70 85.14 3 pPOLKN .30 .70 .05 .89 85.01 4 pPOL0 .30 .70 .80 .41 84.98 Table 6: Results for polynomial discounting compared to pDR and pTOP. |Bi| = 400,000, clusters trained on unique events. perplexity tb:l model |Bi| val test 1 3 pKN 88.03 88.28 2 3:6a pDR 6×104 ae b+ 87.71 87.97 3 3:6a pDR 6×104 ue b+ 85.96 86.22 4 3:6b pTOP 6×104 ae b+ 87.82 88.08 5 3:6b pTOP 6×104 ue b+ 87.09 87.35 6 4 pDR 6×104 ae b87.71 87.97 7 4 pDR 6×104 ue b86.62 86.88 8 4 pTOP 6×104 ae b87.82 88.08 9 4 pTOP 6×104 ue b87.26 87.51 10 5:4 pDR 2×105 ue b+ 85.14 85.39 11 5:4 pTOP 2×105 ue b+ 86.74 86.98 12 6:3 pPOLKN 4×105 ue b+ 85.01 85.26 13 6:4 pPOL0 4×105 ue b+ 84.98 85.22 Table 7: Performance of key models on validation and test sets. tb:l = Table and line the validation result is taken from. ae/ue = all-event/unique-event. b- = unigrams only. b+ = bigrams and unigrams. The linear interpolation αp+(1−α)q of two distributions p and q is a form of linear discounting: p is discounted by 1 −α and q by α. See (Katz, 1987; Jelinek, 1990; Ney et al., 1994). It can thus be viewed as polynomial discounting for r = 1. Absolute discounting could be viewed as a form of polynomial discounting for r = 0. We know of no other work that has explored exponents between 0 and 1 and shown that for this type of exponent, one obtains competitive discounts that could be argued to be simpler than more complex discounts like KN discounts. 6.1 Test set performance We report the test set performance of the key models we have developed in this paper in Table 7. The experiments were run with the optimal parameters 1523 on the validation set as reported in the table referenced in column “tb:l”; e.g., on line 2 of Table 7, (α1, α2) = (.01, .3) as reported on line 6a of Table 3. There is an almost constant difference between validation and test set perplexities, ranging from +.2 to +.3, indicating that test set results are consistent with validation set results. To test significance, we assigned the 2.8M positions in the test set to 48 different bins according to the majority part-of-speech tag of the word in the training set.2 We can then compute perplexity for each bin, compare perplexities for different experiments and use the sign test for determining significance. 
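A small sketch of the two discount variants; the parameter values in the example are the validation-optimal ones reported in Table 6, while the toy KN-style discount function standing in for d(x) is purely illustrative.

```python
def polynomial_discount(count, rho, r, kn_discount=None, min_count=4):
    """Discount e(x) applied to an n-gram with count x, as in Section 6.

    POLKN variant: e(x) = d(x) + rho * x**r for x >= min_count, else d(x),
    where d(x) is the usual modified-KN discount passed in as kn_discount.
    POL0 variant:  e(x) = rho * x**r for all counts (pass kn_discount=None).
    """
    if kn_discount is None:                       # pPOL0
        return rho * count ** r
    extra = rho * count ** r if count >= min_count else 0.0
    return kn_discount(count) + extra             # pPOLKN

# toy stand-in for the KN discounts: 0.5 / 1.0 / 1.5 for counts 1 / 2 / >= 3
d = lambda x: min(x, 3) * 0.5
for x in (1, 3, 10, 10768):                       # 10768 = c(cents a share)
    print(x,
          polynomial_discount(x, rho=0.05, r=0.89, kn_discount=d),  # pPOLKN
          polynomial_discount(x, rho=0.80, r=0.41))                 # pPOL0
```

The key property the sketch demonstrates is that, unlike the flat KN discount for counts above three, the subtracted mass keeps growing with the count, which reallocates more probability from frequent events to the class and lower-order components.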
We indicate results that were significant at p < .05 (n = 48, k ≥32 successes) using a star, e.g., 3<∗2 means that test set perplexity on line 3 is significantly lower than test set perplexity on line 2. The main findings on the validation set also hold for the test set: (i) Trained on unique events and with a sufficiently large |Bi|, both pDR and pTOP are better than KN: 10<∗1, 11<∗1. (ii) Training on unique events is better than training on all events: 3<∗2, 5<∗4, 7<∗6, 9<∗8. (iii) For unique events, using bigram and unigram classes gives better results than using unigram classes only: 3<∗7. Not significant: 5 < 9. (iv) The Dupont-Rosenfeld model pDR is better than the top-level model pTOP: 10<∗11. (v) The model POL0 (polynomial discounting) is the best model overall: Not significant: 13 < 12. (vi) Polynomial discounting is significantly better than KN discounting for the Dupont-Rosenfeld model pDR although the absolute difference in perplexity is small: 13<∗10. Overall, pDR and pPOL0 achieve considerable reductions in test set perplexity from 88.28 to 85.39 and 85.22, respectively. The main result of the experiments is that Dupont-Rosenfeld models (which focus on rare events) are better than the standardly used top-level models; and that training classes on unique events is better than training classes on all events. 2Words with a rare majority tag (e.g., FW ‘foreign word’) and unknown words were assigned to a special class OTHER. 7 Conclusion Our hypothesis was that classes are a generalization mechanism for rare events that serves the same function as history-length interpolation and that classes should therefore be (i) primarily trained on rare events and (ii) receive high weight only if it is likely that a rare event will follow and be weighted in a way analogous to the weighting of lower-order distributions in history-length interpolation. We found clear statistically significant evidence for both (i) and (ii). (i) Classes trained on uniqueevent corpora perform better than classes trained on all-event corpora. (ii) The pDR model (which adjusts the interpolation weight given to classes based on the prevalence of nonfrequent events following) is better than top-level model pTOP (which uses a fixed weight for classes). Most previous work on class-based models has employed top-level interpolation. Our results strongly suggest that the DupontRosenfeld model is a superior model. A comparison of Dupont-Rosenfeld and top-level results suggested that the KN discount mechanism does not discount high-frequency events enough. We empirically determined that better discounts are obtained by letting the discount grow as a function of the count of the discounted event and implemented this as polynomial discounting, an arguably simpler way of discounting than Kneser-Ney discounting. The improvement of polynomial discounts vs. KN discounts was small, but statistically significant. In future work, we would like to find a theoretical justification for the surprising fact that polynomial discounting does at least as well as Kneser-Ney discounting. We also would like to look at other backoff mechanisms (in addition to history length and classes) and incorporate them into the model, e.g., similarity and topic. Finally, training classes on unique events is an extreme way of highly weighting rare events. We would like to explore training regimes that lie between unique-event clustering and all-event clustering and upweight rare events less. Acknowledgements. 
This research was funded by Deutsche Forschungsgemeinschaft (grant SFB 732). We are grateful to Thomas M¨uller, Helmut Schmid and the anonymous reviewers for their helpful comments. 1524 References Jeff Bilmes and Katrin Kirchhoff. 2003. Factored language models and generalized parallel backoff. In HLT-NAACL. Peter F. Brown, Vincent J. Della Pietra, Peter V. de Souza, Jennifer C. Lai, and Robert L. Mercer. 1992. Classbased n-gram models of natural language. Computational Linguistics, 18(4):467–479. Stanley F. Chen and Joshua Goodman. 1996. An empirical study of smoothing techniques for language modeling. CoRR, cmp-lg/9606011. Stanley F. Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359–393. Stanley F. Chen. 2009. Shrinking exponential language models. In HLT/NAACL, pages 468–476. Alexander Clark. 2003. Combining distributional and morphological information for part of speech induction. In EACL, pages 59–66. Sabine Deligne and Yoshinori Sagisaka. 2000. Statistical language modeling with a class-based n-multigram model. Computer Speech & Language, 14(3):261– 279. Pierre Dupont and Ronald Rosenfeld. 1997. Lattice based language models. Technical Report CMU-CS97-173, Carnegie Mellon University. Ahmad Emami and Frederick Jelinek. 2005. Random clustering for language modeling. In ICASSP, volume 1, pages 581–584. Frederick Jelinek and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Edzard S. Gelsema and Laveen N. Kanal, editors, Pattern Recognition in Practice, pages 381–397. North-Holland. Frederick Jelinek. 1990. Self-organized language modeling for speech recognition. In Alex Waibel and KaiFu Lee, editors, Readings in speech recognition, pages 450–506. Morgan Kaufmann. Raquel Justo and M. In´es Torres. 2009. Phrase classes in two-level language models for ASR. Pattern Analysis & Applications, 12(4):427–437. Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, 35(3):400–401. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In ICASSP, volume 1, pages 181–184. Hong-Kwang J. Kuo and Wolfgang Reichl. 1999. Phrase-based language models for speech recognition. In European Conference on Speech Communication and Technology, volume 4, pages 1595–1598. John G. McMahon and Francis J. Smith. 1996. Improving statistical language model performance with automatically generated word hierarchies. Computational Linguistics, 22:217–247. Saeedeh Momtazi and Dietrich Klakow. 2009. A word clustering approach for language model-based sentence retrieval in question answering systems. In ACM Conference on Information and Knowledge Management, pages 1911–1914. Hermann Ney, Ute Essen, and Reinhard Kneser. 1994. On structuring probabilistic dependencies in stochastic language modelling. Computer Speech and Language, 8:1–38. Roi Reichart, Omri Abend, and Ari Rappoport. 2010. Type level clustering evaluation: new measures and a pos induction case study. In Proceedings of the Fourteenth Conference on Computational Natural Language Learning, pages 77–87. Hinrich Sch¨utze. 1995. Distributional part-of-speech tagging. In EACL 7, pages 141–148. Andreas Stolcke. 2002. SRILM - An extensible language modeling toolkit. In International Conference on Spoken Language Processing, pages 901–904. Bernhard Suhm and Alex Waibel. 1994. 
Towards better language models for spontaneous speech. In International Conference on Spoken Language Processing, pages 831–834. Jakob Uszkoreit and Thorsten Brants. 2008. Distributed word clustering for large scale class-based language modeling in machine translation. In Annual Meeting of the Association for Computational Linguistics, pages 755–762. E.W.D. Whittaker and P.C. Woodland. 2001. Efficient class-based language modelling for very large vocabularies. In ICASSP, volume 1, pages 545–548. Michael Wiegand and Dietrich Klakow. 2008. Optimizing language models for polarity classification. In ECIR, pages 612–616. T. Yokoyama, T. Shinozaki, K. Iwano, and S. Furui. 2003. Unsupervised class-based language model adaptation for spontaneous speech recognition. In ICASSP, volume 1, pages 236–239. Imed Zitouni and Qiru Zhou. 2007. Linearly interpolated hierarchical n-gram language models for speech recognition engines. In Michael Grimm and Kristian Kroschel, editors, Robust Speech Recognition and Understanding, pages 301–318. I-Tech Education and Publishing. Imed Zitouni and Qiru Zhou. 2008. Hierarchical linear discounting class n-gram language models: A multilevel class hierarchy approach. In International Conference on Acoustics, Speech, and Signal Processing, pages 4917–4920. 1525
2011
152
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1526–1535, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Structural Topic Model for Latent Topical Structure Analysis Hongning Wang, Duo Zhang, ChengXiang Zhai Department of Computer Science University of Illinois at Urbana-Champaign Urbana IL, 61801 USA {wang296, dzhang22, czhai}@cs.uiuc.edu Abstract Topic models have been successfully applied to many document analysis tasks to discover topics embedded in text. However, existing topic models generally cannot capture the latent topical structures in documents. Since languages are intrinsically cohesive and coherent, modeling and discovering latent topical transition structures within documents would be beneficial for many text analysis tasks. In this work, we propose a new topic model, Structural Topic Model, which simultaneously discovers topics and reveals the latent topical structures in text through explicitly modeling topical transitions with a latent first-order Markov chain. Experiment results show that the proposed Structural Topic Model can effectively discover topical structures in text, and the identified structures significantly improve the performance of tasks such as sentence annotation and sentence ordering. 1 Introduction A great amount of effort has recently been made in applying statistical topic models (Hofmann, 1999; Blei et al., 2003) to explore word co-occurrence patterns, i.e. topics, embedded in documents. Topic models have become important building blocks of many interesting applications (see e.g., (Blei and Jordan, 2003; Blei and Lafferty, 2007; Mei et al., 2007; Lu and Zhai, 2008)). In general, topic models can discover word clustering patterns in documents and project each document to a latent topic space formed by such word clusters. However, the topical structure in a document, i.e., the internal dependency between the topics, is generally not captured due to the exchangeability assumption (Blei et al., 2003), i.e., the document generation probabilities are invariant to content permutation. In reality, natural language text rarely consists of isolated, unrelated sentences, but rather collocated, structured and coherent groups of sentences (Hovy, 1993). Ignoring such latent topical structures inside the documents means wasting valuable clues about topics and thus would lead to non-optimal topic modeling. Taking apartment rental advertisements as an example, when people write advertisements for their apartments, it’s natural to first introduce “size” and “address” of the apartment, and then “rent” and “contact”. Few people would talk about “restriction” first. If this kind of topical structures are captured by a topic model, it would not only improve the topic mining results, but, more importantly, also help many other document analysis tasks, such as sentence annotation and sentence ordering. Nevertheless, very few existing topic models attempted to model such structural dependency among topics. The Aspect HMM model introduced in (Blei and Moreno, 2001) combines pLSA (Hofmann, 1999) with HMM (Rabiner, 1989) to perform document segmentation over text streams. However, Aspect HMM separately estimates the topics in the training set and depends on heuristics to infer the transitional relations between topics. 
The Hidden Topic Markov Model (HTMM) proposed by (Gruber et al., 2007) extends the traditional topic models by assuming words in each sentence share the same topic assignment, and topics transit between adjacent sentences. However, the transitional structures among topics, i.e., how likely one topic would follow another topic, are not captured in this model. 1526 In this paper, we propose a new topic model, named Structural Topic Model (strTM) to model and analyze both latent topics and topical structures in text documents. To do so, strTM assumes: 1) words in a document are either drawn from a content topic or a functional (i.e., background) topic; 2) words in the same sentence share the same content topic; and 3) content topics in the adjacent sentences follow a topic transition that satisfies the first order Markov property. The first assumption distinguishes the semantics of the occurrence of each word in the document, the second requirement confines the unrealistic “bag-of-word” assumption into a tighter unit, and the third assumption exploits the connection between adjacent sentences. To evaluate the usefulness of the identified topical structures by strTM, we applied strTM to the tasks of sentence annotation and sentence ordering, where correctly modeling the document structure is crucial. On the corpus of 8,031 apartment advertisements from craiglist (Grenager et al., 2005) and 1,991 movie reviews from IMDB (Zhuang et al., 2006), strTM achieved encouraging improvement in both tasks compared with the baseline methods that don’t explicitly model the topical structure. The results confirm the necessity of modeling the latent topical structures inside documents, and also demonstrate the advantages of the proposed strTM over existing topic models. 2 Related Work Topic models have been successfully applied to many problems, e.g., sentiment analysis (Mei et al., 2007), document summarization (Lu and Zhai, 2008) and image annotation (Blei and Jordan, 2003). However, in most existing work, the dependency among the topics is loosely governed by the prior topic distribution, e.g., Dirichlet distribution. Some work has attempted to capture the interrelationship among the latent topics. Correlated Topic Model (Blei and Lafferty, 2007) replaces Dirichlet prior with logistic Normal prior for topic distribution in each document in order to capture the correlation between the topics. HMM-LDA (Griffiths et al., 2005) distinguishes the short-range syntactic dependencies from long-range semantic dependencies among the words in each document. But in HMM-LDA, only the latent variables for the syntactic classes are treated as a locally dependent sequence, while latent topics are treated the same as in other topic models. Chen et al. introduced the generalized Mallows model to constrain the latent topic assignments (Chen et al., 2009). In their model, they assume there exists a canonical order among the topics in the collection of related documents and the same topics are forced not to appear in disconnected portions of the topic sequence in one document (sampling without replacement). Our method relaxes this assumption by only postulating transitional dependency between topics in the adjacent sentences (sampling with replacement) and thus potentially allows a topic to appear multiple times in disconnected segments. As discussed in the previous section, HTMM (Gruber et al., 2007) is the most similar model to ours. 
HTMM models the document structure by assuming words in the same sentence share the same topic assignment and successive sentences are more likely to share the same topic. However, HTMM only loosely models the transition between topics as a binary relation: the same as the previous sentence’s assignment or draw a new one with a certain probability. This simplified coarse modeling of dependency could not fully capture the complex structure across different documents. In contrast, our strTM model explicitly captures the regular topic transitions by postulating the first order Markov property over the topics. Another line of related work is discourse analysis in natural language processing: discourse segmentation (Sun et al., 2007; Galley et al., 2003) splits a document into a linear sequence of multi-paragraph passages, where lexical cohesion is used to link together the textual units; discourse parsing (Soricut and Marcu, 2003; Marcu, 1998) tries to uncover a more sophisticated hierarchical coherence structure from text to represent the entire discourse. One work in this line that shares a similar goal as ours is the content models (Barzilay and Lee, 2004), where an HMM is defined over text spans to perform information ordering and extractive summarization. A deficiency of the content models is that the identification of clusters of text spans is done separately from transition modeling. Our strTM addresses this deficiency by defining a generative process to simultaneously capture the topics and the transitional re1527 lationship among topics: allowing topic modeling and transition modeling to reinforce each other in a principled framework. 3 Structural Topic Model In this section, we formally define the Structural Topic Model (strTM) and discuss how it captures the latent topics and topical structures within the documents simultaneously. From the theory of linguistic analysis (Kamp, 1981), we know that document exhibits internal structures, where structural segments encapsulate semantic units that are closely related. In strTM, we treat a sentence as the basic structure unit, and assume all the words in a sentence share the same topical aspect. Besides, two adjacent segments are assumed to be highly related (capturing cohesion in text); specifically, in strTM we pose a strong transitional dependency assumption among the topics: the choice of topic for each sentence directly depends on the previous sentence’s topic assignment, i.e., first order Markov property. Moveover, taking the insights from HMM-LDA that not all the words are content conveying (some of them may just be a result of syntactic requirement), we introduce a dummy functional topic zB for every sentence in the document. We use this functional topic to capture the document-independent word distribution, i.e., corpus background (Zhai et al., 2004). As a result, in strTM, every sentence is treated as a mixture of content and functional topics. Formally, we assume a corpus consists of D documents with a vocabulary of size V, and there are k content topics embedded in the corpus. In a given document d, there are m sentences and each sentence i has Ni words. We assume the topic transition probability p(z|z′) is drawn from a Multinomial distribution Mul(αz′), and the word emission probability under each topic p(w|z) is drawn from a Multinomial distribution Mul(βz). 
To get a unified description of the generation process, we add another dummy topic T-START in strTM, which is the initial topic with position “-1” for every document but does not emit any words. In addition, since our functional topic is assumed to occur in all the sentences, we don’t need to model its transition with other content topics. We use a Binomial variable π to control the proportion between content and functional topics in each sentence. Therefore, there are k+1 topic transitions, one for T-START and others for k content topics; and k emission probabilities for the content topics, with an additional one for the functional topic zB (in total k+1 emission probability distributions). Conditioned on the model parameters Θ = (α, β, π), the generative process of a document in strTM can be described as follows: 1. For each sentence si in document d: (a) Draw topic zi from Multinomial distribution conditioned on the previous sentence si−1’s topic assignment zi−1: zi ∼Mul(αzi−1) (b) Draw each word wij in sentence si from the mixture of content topic zi and functional topic zB: wij ∼πp(wij|β, zi)+(1−π)p(wij|β, zB) The joint probability of sentences and topics in one document defined by strTM is thus given by: p(S0, S1, . . . , Sm, z|α, β, π) = m ∏ i=1 p(zi|α, zi−1)p(Si|zi) (1) where the topic to sentence emission probability is defined as: p(Si|zi) = Ni ∏ j=0 [ πp(wij|β, zi) + (1 −π)p(wij|β, zB) ] (2) This process is graphically illustrated in Figure 1. zm z0 …….. wm …….. Nm D K+1 w0 N0 K+1 z1 w1 N1 Tstart Figure 1: Graphical Representation of strTM. From the definition of strTM, we can see that the document structure is characterized by a documentspecific topic chain, and forcing the words in one 1528 sentence to share the same content topic ensures semantic cohesion of the mined topics. Although we do not directly model the topic mixture for each document as the traditional topic models do, the word co-occurrence patterns within the same document are captured by topic propagation through the transitions. This can be easily understood when we write down the posterior probability of the topic assignment for a particular sentence: p(zi|S0, S1, . . . , Sm, Θ) =p(S0, S1, . . . , Sm|zi, Θ)p(zi) p(S0, S1, . . . , Sm) ∝p(S0, S1, . . . , Si, zi) × p(Si+1, Si+2, . . . , Sm|zi) = ∑ zi−1 p(S0, . . . , Si−1, zi−1)p(zi|zi−1)p(Si|zi) × ∑ zi+1 p(Si+1, . . . , Sm|zi+1)p(zi+1|zi) (3) The first part of Eq(3) describes the recursive influence on the choice of topic for the ith sentence from its preceding sentences, while the second part captures how the succeeding sentences affect the current topic assignment. Intuitively, when we need to decide a sentence’s topic, we will look “backward” and “forward” over all the sentences in the document to determine a “suitable” one. In addition, because of the first order Markov property, the local topical dependency gets more emphasis, i.e., they are interacting directly through the transition probabilities p(zi|zi−1) and p(zi+1|zi). And such interaction on sentences farther away would get damped by the multiplication of such probabilities. This result is reasonable, especially in a long document, since neighboring sentences are more likely to cover similar topics than two sentences far apart. 
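For concreteness, the generative process above can be sketched in a few lines of Python. This is only an illustrative simulation under simplifying assumptions (a fixed sentence length, numpy sampling helpers, and the convention that row 0 of the transition matrix stands for T-START); it is not the authors' implementation.

import numpy as np

def generate_document(alpha, beta, beta_B, pi, num_sentences, sentence_len, rng):
    # alpha: (k+1) x k transition matrix, with row 0 playing the role of T-START
    # beta: k x V content-topic emission distributions; beta_B: length-V background distribution
    # pi: weight of the content topic in the per-sentence word mixture
    k, V = beta.shape
    sentences, topics = [], []
    prev = 0  # every document starts from T-START
    for _ in range(num_sentences):
        z = rng.choice(k, p=alpha[prev])                # z_i ~ Mul(alpha_{z_{i-1}})
        word_dist = pi * beta[z] + (1 - pi) * beta_B    # mixture of content topic z_i and functional topic z_B
        words = rng.choice(V, size=sentence_len, p=word_dist)
        sentences.append(words.tolist())
        topics.append(int(z))
        prev = z + 1                                    # shift by one because row 0 is reserved for T-START
    return sentences, topics

A call such as generate_document(alpha, beta, beta_B, 0.7, 8, 14, np.random.default_rng(0)) then produces one synthetic document together with its latent topic chain; the parameter values here are arbitrary illustrations.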
4 Posterior Inference and Parameter Estimation The chain structure in strTM enables us to perform exact inference: posterior distribution can be efficiently calculated by the forward-backward algorithm, the optimal topic sequence can be inferred using the Viterbi algorithm, and parameter estimation can be solved by the Expectation Maximization (EM) algorithm. More technical details can be found in (Rabiner, 1989). In this section, we only discuss strTM-specific procedures. In the E-Step of EM algorithm, we need to collect the expected count of a sequential topic pair (z, z′) and a topic-word pair (z, w) to update the model parameters α and β in the M-Step. In strTM, E[c(z, z′)] can be easily calculated by forwardbackward algorithm. But we have to go one step further to fetch the required sufficient statistics for E[c(z, w)], because our emission probabilities are defined over sentences. Through forward-backward algorithm, we can get the posterior probability p(si, z|d, Θ). In strTM, words in one sentence are independently drawn from either a specific content topic z or functional topic zB according to the mixture weight π. Therefore, we can accumulate the expected count of (z, w) over all the sentences by: E[c(z, w)] = ∑ d,s∈d πp(w|z)p(s, z|d, Θ)c(w, s) πp(w|z) + (1 −π)p(w|zB) (4) where c(w, s) indicates the frequency of word w in sentence s. Eq(4) can be easily explained as follows. Since we already observe topic z and sentence s cooccur with probability p(s, z|d, Θ), each word w in s should share the same probability of being observed with content topic z. Thus the expected count of c(z, w) in this sentence would be p(s, z|d, Θ)c(w, s). However, since each sentence is also associated with the functional topic zB, the word w may also be drawn from zB. By applying the Bayes’ rule, we can properly reallocate the expected count of c(z, w) by Eq(4). The same strategy can be applied to obtain E[c(zB, w)]. As discussed in (Johnson, 2007), to avoid the problem that EM algorithm tends to assign a uniform word/state distribution to each hidden state, which deviates from the heavily skewed word/state distributions empirically observed, we can apply a Bayesian estimation approach for strTM. Thus we introduce prior distributions over the topic transition Mul(αz′) and emission probabilities Mul(βz), and use the Variational Bayesian (VB) (Jordan et al., 1999) estimator to obtain a model with more skewed word/state distributions. Since both the topic transition and emission probabilities are Multinomial distributions in strTM, the conjugate Dirichlet distribution is the natural 1529 choice for imposing a prior on them (Diaconis and Ylvisaker, 1979). Thus, we further assume: αz ∼Dir(η) (5) βz ∼Dir(γ) (6) where we use exchangeable Dirichlet distributions to control the sparsity of αz and βz. As η and γ approach zero, the prior strongly favors the models in which each hidden state emits as few words/states as possible. In our experiments, we empirically tuned η and γ on different training corpus to optimize loglikelihood. The resulting VB estimation only requires a minor modification to the M-Step in the original EM algorithm: ¯αz = Φ(E[c(z′, z)] + η) Φ(E[c(z)] + kη) (7) ¯βz = Φ(E[c(w, z)] + γ) Φ(E[c(z)] + V γ) (8) where Φ(x) is the exponential of the first derivative of the log-gamma function. The optimal setting of π for the proportion of content topics in the documents is empirically tuned by cross-validation over the training corpus to maximize the log-likelihood. 
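The reallocation in Eq (4) is easy to implement once the forward-backward pass has produced the sentence-topic posteriors. The sketch below assumes a particular data layout (a dictionary of word frequencies per sentence and a length-k posterior vector per sentence) chosen only to make the bookkeeping explicit; the forward-backward and Viterbi routines themselves follow the standard HMM recipes and are not repeated here.

import numpy as np

def expected_topic_word_counts(docs, posteriors, beta, beta_B, pi):
    # docs[d][i]: dict giving the frequency c(w, s) of each word in sentence i of document d
    # posteriors[d][i]: length-k vector of p(s_i, z | d, Theta) from the forward-backward pass
    # beta: k x V content emission matrix; beta_B: length-V background emission vector
    k, V = beta.shape
    E_count = np.zeros((k, V))
    for d, doc in enumerate(docs):
        for i, sent in enumerate(doc):
            post = np.asarray(posteriors[d][i])
            for w, c_ws in sent.items():
                denom = pi * beta[:, w] + (1 - pi) * beta_B[w]
                # Bayes-rule split of c(w, s) between content topic z and background topic z_B, as in Eq (4)
                E_count[:, w] += pi * beta[:, w] * post * c_ws / denom
    return E_count

The same loop with the roles of the content and background terms swapped yields E[c(z_B, w)].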
5 Experimental Results In this section, we demonstrate the effectiveness of strTM in identifying latent topical structures from documents, and quantitatively evaluate how the mined topic transitions can help the tasks of sentence annotation and sentence ordering. 5.1 Data Set We used two different data sets for evaluation: apartment advertisements (Ads) from (Grenager et al., 2005) and movie reviews (Review) from (Zhuang et al., 2006). The Ads data consists of 8,767 advertisements for apartment rentals crawled from Craigslist website. 302 of them have been labeled with 11 fields, including size, feature, address, etc., on the sentence level. The review data contains 2,000 movie reviews discussing 11 different movies from IMDB. These reviews are manually labeled with 12 movie feature labels (We didn’t use the additional opinion annotations in this data set.) , e.g., VP (vision effects), MS (music and sound effects), etc., also on the sentences, but the annotations in the review data set is much sparser than that in the Ads data set (see in Table 1). The sentence-level annotations make it possible to quantitatively evaluate the discovered topic structures. We performed simple preprocessing on these two data sets: 1) removed a standard list of stop words, terms occurring in less than 2 documents; 2) discarded the documents with less than 2 sentences; 3) aggregated sentence-level annotations into document-level labels (binary vector) for each document. Table 1 gives a brief summary on these two data sets after the processing. Ads Review Document Size 8,031 1,991 Vocabulary Size 21,993 14,507 Avg Stn/Doc 8.0 13.9 Avg Labeled Stn/Doc 7.1* 5.1 Avg Token/Stn 14.1 20.0 *Only in 302 labeled ads Table 1: Summary of evaluation data set 5.2 Topic Transition Modeling First, we qualitatively demonstrate the topical structure identified by strTM from Ads data1. We trained strTM with 11 content topics in Ads data set, used word distribution under each class (estimated by maximum likelihood estimator on document-level labels) as priors to initialize the emission probability Mul(βz) in Eq(6), and treated document-level labels as the prior for transition from T-START in each document, so that the mined topics can be aligned with the predefined class labels. Figure 2 shows the identified topics and the transitions among them. To get a clearer view, we discarded the transitions below a threshold of 0.1 and removed all the isolated nodes. From Figure 2, we can find some interesting topical structures. For example, people usually start with “size”, “features” and “address”, and end with “contact” information when they post an apart1Due to the page limit, we only show the result in Ads data set. 1530 TELEPHONE appointment information contact email parking kitchen room laundry storage close shopping transportation bart location http photos click pictures view deposit month lease rent year pets kitchen cat negotiate smoking water garbage included paid utilities NUM bedroom bath room large Figure 2: Estimated topics and topical transitions in Ads data set ment ads. Also, we can discover a strong transition from “size” to “features”. This intuitively makes sense because people usually write “it’s a two bedrooms apartment” first, and then describe other “features” about the apartment. The mined topics are also quite meaningful. For example, “restrictions” are usually put over pets and smoking, and parking and laundry are always the major “features” of an apartment. 
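A pruned transition graph of the kind shown in Figure 2 is straightforward to extract from the estimated transition matrix; the sketch below simply applies the 0.1 threshold mentioned above and drops isolated topics. The topic names are whatever class labels the content topics were aligned to, and the plotting itself is omitted.

def transition_graph(trans, topic_names, threshold=0.1):
    # trans: k x k matrix with trans[i][j] = p(topic j | topic i), T-START excluded
    k = len(topic_names)
    edges = [(topic_names[i], topic_names[j], trans[i][j])
             for i in range(k) for j in range(k) if trans[i][j] >= threshold]
    connected = {a for a, _, _ in edges} | {b for _, b, _ in edges}
    nodes = [name for name in topic_names if name in connected]
    return nodes, edges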
To further quantitatively evaluate the estimated topic transitions, we used Kullback-Leibler (KL) divergency between the estimated transition matrix and the “ground-truth” transition matrix as the metric. Each element of the “ground-truth” transition matrix was calculated by Eq(9), where c(z, z′) denotes how many sentences annotated by z′ immediately precede one annotated by z. δ is a smoothing factor, and we fixed it to 0.01 in the experiment. ¯p(z|z′) = c(z, z′) + δ c(z) + kδ (9) The KL divergency between two transition matrices is defined in Eq(10). Because we have a k × k transition matrix (Tstart is not included), we calculated the average KL divergency against the groundtruth over all the topics: avgKL= ∑k i=1 KL(p(z|z′ i)||¯p(z|z′ i))+KL(¯p(z|z′ i)||p(z|z′ i)) 2k (10) where ¯p(z|z′) is the ground-truth transition probability estimated by Eq(9), and p(z|z′) is the transition probability given by the model. We used pLSA (Hofmann, 1999), latent permutation model (lPerm) (Chen et al., 2009) and HTMM (Gruber et al., 2007) as the baseline methods for the comparison. Because none of these three methods can generate a topic transition matrix directly, we extended them a little bit to achieve this goal. For pLSA, we used the document-level labels as priors for the topic distribution in each document, so that the estimated topics can be aligned with the predefined class labels. After the topics were estimated, for each sentence we selected the topic that had the highest posterior probability to generate the sentence as its class label. For lPerm and HTMM, we used Kuhn-Munkres algorithm (Lov´asz and Plummer, 1986) to find the optimal topic-to-class alignment based on the sentence-level annotations. After the sentences were annotated with class labels, we estimated the topic transition matrices for all of these three methods by Eq(9). 1531 Since only a small portion of sentences are annotated in the Review data set, very few neighboring sentences are annotated at the same time, which introduces many noisy transitions. As a result, we only performed the comparison on the Ads data set. The “ground-truth” transition matrix was estimated based on all the 302 annotated ads. pLSA+prior lPerm HTMM strTM avgKL 0.743 1.101 0.572 0.372 p-value 0.023 1e-4 0.007 – Table 2: Comparison of estimated topic transitions on Ads data set In Table 2, the p-value was calculated based on ttest of the KL divergency between each topic’s transition probability against strTM. From the results, we can see that avgKL of strTM is smaller than the other three baseline methods, which means the estimated transitional relation by strTM is much closer to the ground-truth transition. This demonstrates that strTM captures the topical structure well, compared with other baseline methods. 5.3 Sentence Annotation In this section, we demonstrate how the identified topical structure can benefit the task of sentence annotation. Sentence annotation is one step beyond the traditional document classification task: in sentence annotation, we want to predict the class label for each sentence in the document, and this will be helpful for other problems, including extractive summarization and passage retrieval. However, the lack of detailed annotations on sentences greatly limits the effectiveness of the supervised classification methods, which have been proved successful on document classifications. In this experiment, we propose to use strTM to address this annotation task. 
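This evaluation can be reproduced from the sentence-level annotations alone. The sketch below builds the smoothed ground-truth matrix of Eq (9) from adjacent-sentence label pairs and computes the symmetrised, row-averaged KL divergence of Eq (10); representing each document as a list of per-sentence class labels is an assumption of the sketch, and the row sum of the counts stands in for c(z').

import numpy as np

def ground_truth_transitions(label_sequences, k, delta=0.01):
    # counts[z_prev, z_next] accumulates how often class z_prev immediately precedes z_next
    counts = np.zeros((k, k))
    for seq in label_sequences:
        for z_prev, z_next in zip(seq, seq[1:]):
            counts[z_prev, z_next] += 1
    # Eq (9): additive smoothing with delta; each row is the distribution p(z | z_prev)
    return (counts + delta) / (counts.sum(axis=1, keepdims=True) + k * delta)

def avg_kl(P, Q):
    # Eq (10): average symmetrised KL divergence between corresponding rows of P and Q
    kl = lambda p, q: float(np.sum(p * np.log(p / q)))
    return np.mean([(kl(P[i], Q[i]) + kl(Q[i], P[i])) / 2 for i in range(len(P))])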
One advantage of strTM is that it captures the topic transitions on the sentence level within documents, which provides a regularization over the adjacent predictions. To examine the effectiveness of such structural regularization, we compared strTM with four baseline methods: pLSA, lPerm, HTMM and Naive Bayes model. The sentence labeling approaches for strTM, pLSA, lPerm and HTMM have been discussed in the previous section. As for Naive Bayes model, we used EM algorithm 2 with both labeled and unlabeled data for the training purpose (we used the same unigram features as in topics models). We set weights for the unlabeled data to be 10−3 in Naive Bayes with EM. The comparison was performed on both data sets. We set the size of topics in each topic model equal to the number of classes in each data set accordingly. To tackle the situation where some sentences in the document are not strictly associated with any classes, we introduced an additional NULL content topic in all the topic models. During the training phase, none of the methods used the sentence-level annotations in the documents, so that we treated the whole corpus as the training and testing set. To evaluate the prediction performance, we calculated accuracy, recall and precision based on the correct predictions over the sentences, and averaged over all the classes as the criterion. Model Accuracy Recall Precison pLSA+prior 0.432 0.649 0.457 lPerm 0.610 0.514 0.471 HTMM 0.606 0.588 0.443 NB+EM 0.528 0.337 0.612 strTM 0.747 0.674 0.620 Table 3: Sentence annotation performance on Ads data set Model Accuracy Recall Precison pLSA+prior 0.342 0.278 0.250 lPerm 0.286 0.205 0.184 HTMM 0.369 0.131 0.149 NB+EM 0.341 0.354 0.431 strTM 0.541 0.398 0.323 Table 4: Sentence annotation performance on Review data set Annotation performance on the two data sets is shown in Table 3 and Table 4. We can see that strTM outperformed all the other baseline methods on most of the metrics: strTM has the best accuracy and recall on both of the two data sets. The improvement confirms our hypothesis that besides solely depending on the local word patterns to perform predic2Mallet package: http://mallet.cs.umass.edu/ 1532 tions, adjacent sentences provide a structural regularization in strTM (see Eq(3)). Compared with lPerm, which postulates a strong constrain over the topic assignment (sampling without replacement), strTM performed much better on both of these two data sets. This validates the benefit of modeling local transitional relation compared with the global ordering. Besides, strTM achieved over 46% accuracy improvement compared with the second best HTMM in the review data set. This result shows the advantage of explicitly modeling the topic transitions between neighbor sentences instead of using a binary relation to do so as in HTMM. To further testify how the identified topical structure can help the sentence annotation task, we first randomly removed 100 annotated ads from the training corpus and used them as the testing set. Then, we used the ground-truth topic transition matrix estimated from the training data to order those 100 ads according to their fitness scores under the groundtruth topic transition matrix, which is defined in Eq(11). We tested the prediction accuracy of different models over two different partitions, top 50 and bottom 50, according to this order. 
fitness(d) = 1 |d| |d| ∑ i=0 log ¯p(ti|ti−1) (11) where ti is the class label for ith sentence in document d, |d| is the number of sentences in document d, and ¯p(ti|ti−1) is the transition probability estimated by Eq(9). Top 50 p-value Bot 50 p-value pLSA+prior 0.496 4e-12 0.542 0.004 lPerm 0.669 0.003 0.505 8e-4 HTMM 0.683 0.004 0.579 0.003 NB + EM 0.492 1e-12 0.539 0.002 strTM 0.752 – 0.644 – Table 5: Sentence annotation performance according to structural fitness The results are shown in Table 5. From this table, we can find that when the testing documents follow the regular patterns as in the training data, i.e., top 50 group, strTM performs significantly better than the other methods; when the testing documents don’t share such structure, i.e., bottom 50 group, strTM’s performance drops. This comparison confirms that when a testing document shares similar topic structure as the training data, the topical transitions captured by strTM can help the sentence annotation task a lot. In contrast, because pLSA and Naive Bayes don’t depend on the document’s structure, their performance does not change much over these two partitions. 5.4 Sentence Ordering In this experiment, we illustrate how the learned topical structure can help us better arrange sentences in a document. Sentence ordering, or text planning, is essential to many text synthesis applications, including multi-document summarization (Goldstein et al., 2000) and concept-to-text generation (Barzilay and Lapata, 2005). In strTM, we evaluate all the possible orderings of the sentences in a given document and selected the optimal one which gives the highest generation probability: ¯σ(m) = arg max σ(m) ∑ z p(Sσ[0], Sσ[1], . . . , Sσ[m], z|Θ) (12) where σ(m) is a permutation of 1 to m, and σ[i] is the ith element in this permutation. To quantitatively evaluate the ordering result, we treated the original sentence order (OSO) as the perfect order and used Kendall’s τ(σ) (Lapata, 2006) as the evaluation metric to compute the divergency between the optimum ordering given by the model and OSO. Kendall’s τ(σ) is widely used in information retrieval domain to measure the correlation between two ranked lists and it indicates how much an ordering differs from OSO, which ranges from 1 (perfect matching) to -1 (totally mismatching). Since only the HTMM and lPerm take the order of sentences in the document into consideration, we used them as the baselines in this experiment. We ranked OSO together with candidate permutations according to the corresponding model’s generation probability. However, when the size of documents becomes larger, it’s infeasible to permutate all the orderings, therefore we randomly permutated 200 possible orderings of sentences as candidates when there were more than 200 possible candidates. The 1533 2bedroom 1bath in very nice complex! Pool, carport, laundry facilities!! Call Don (650)2075769 to see! Great location!! Also available, 2bed.2bath for $1275 in same complex. =⇒ 2bedroom 1bath in very nice complex! Pool, carport, laundry facilities!! Great location!! Also available, 2bed.2bath for $1275 in same complex. Call Don (650)207-5769 to see! 2 bedrooms 1 bath + a famyly room in a cul-desac location. Please drive by and call Marilyn for appointment 650-652-5806. Address: 517 Price Way, Vallejo. No Pets Please! =⇒ 2 bedrooms 1 bath + a famyly room in a cul-desac location. Address: 517 Price Way, Vallejo. No Pets Please! Please drive by and call Marilyn for appointment 650-652-5806. 
Table 6: Sample results for document ordering by strTM experiment was performed on both data sets with 80% data for training and the other 20% for testing. We calculated the τ(σ) of all these models for each document in the two data sets and visualized the distribution of τ(σ) in each data set with histogram in Figure 3. From the results, we could observe that strTM’s τ(σ) is more skewed towards the positive range (with mean 0.619 in Ads data set and 0.398 in review data set) than lPerm’s results (with mean 0.566 in Ads data set and 0.08 in review data set) and HTMM’s results (with mean 0.332 in Ads data set and 0.286 in review data set). This indicates that strTM better captures the internal structure within the documents. −1 −0.8 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 0.8 1 0 100 200 300 400 500 600 700 800 900 τ(σ) # of Documents Ads lPerm HTMM strTM −1 −0.8 −0.6 −0.4 −0.2 0 0.2 0.4 0.6 0.8 1 0 20 40 60 80 100 120 140 160 τ(σ) # of Documents Review lPerm HTMM strTM (a) Ads (b) Review Figure 3: Document Ordering Performance in τ(σ). We see that all methods performed better on the Ads data set than the review data set, suggesting that the topical structures are more coherent in the Ads data set than the review data. Indeed, in the Ads data, strTM perfectly recovered 52.9% of the original sentence order. When examining some mismatched results, we found that some of them were due to an “outlier” order given by the original document (in comparison to the “regular” patterns in the set). In Table 6, we show two such examples where we see the learned structure “suggested” to move the contact information to the end, which intuitively gives us a more regular organization of the ads. It’s hard to say that in this case, the system’s ordering is inferior to that of the original; indeed, the system order is arguably more natural than the original order. 6 Conclusions In this paper, we proposed a new structural topic model (strTM) to identify the latent topical structure in documents. Different from the traditional topic models, in which exchangeability assumption precludes them to capture the structure of a document, strTM captures the topical structure explicitly by introducing transitions among the topics. Experiment results show that both the identified topics and topical structure are intuitive and meaningful, and they are helpful for improving the performance of tasks such as sentence annotation and sentence ordering, where correctly recognizing the document structure is crucial. Besides, strTM is shown to outperform not only the baseline topic models that fail to model the dependency between the topics, but also the semi-supervised Naive Bayes model for the sentence annotation task. Our work can be extended by incorporating richer features, such as named entity and co-reference, to enhance the model’s capability of structure finding. Besides, advanced NLP techniques for document analysis, e.g., shallow parsing, may also be used to further improve structure finding. 7 Acknowledgments We thank the anonymous reviewers for their useful comments. This material is based upon work supported by the National Science Foundation under Grant Numbers IIS-0713581 and CNS-0834709, and NASA grant NNX08AC35A. 1534 References R. Barzilay and M. Lapata. 2005. Collective content selection for concept-to-text generation. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 331–338. R. Barzilay and L. Lee. 2004. 
Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of HLT-NAACL, pages 113–120. D.M. Blei and M.I. Jordan. 2003. Modeling annotated data. In Proceedings of the 26th annual international ACM SIGIR conference, pages 127–134. D.M. Blei and J.D. Lafferty. 2007. A correlated topic model of science. The Annals of Applied Statistics, 1(1):17–35. D.M. Blei and P.J. Moreno. 2001. Topic segmentation with an aspect hidden Markov model. In Proceedings of the 24th annual international ACM SIGIR conference, page 348. ACM. D.M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. The Journal of Machine Learning Research, 3(2-3):993 – 1022. H. Chen, SRK Branavan, R. Barzilay, and D.R. Karger. 2009. Global models of document structure using latent permutations. In Proceedings of HLT-NAACL, pages 371–379. P. Diaconis and D. Ylvisaker. 1979. Conjugate priors for exponential families. The Annals of statistics, 7(2):269–281. M. Galley, K. McKeown, E. Fosler-Lussier, and H. Jing. 2003. Discourse segmentation of multi-party conversation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 562–569. J. Goldstein, V. Mittal, J. Carbonell, and M. Kantrowitz. 2000. Multi-document summarization by sentence extraction. In NAACL-ANLP 2000 Workshop on Automatic summarization, pages 40–48. T. Grenager, D. Klein, and C.D. Manning. 2005. Unsupervised learning of field segmentation models for information extraction. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 371–378. T.L. Griffiths, M. Steyvers, D.M. Blei, and J.B. Tenenbaum. 2005. Integrating topics and syntax. Advances in neural information processing systems, 17:537– 544. Amit Gruber, Yair Weiss, and Michal Rosen-Zvi. 2007. Hidden topic markov models. volume 2, pages 163– 170. T. Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, pages 50–57. E.H. Hovy. 1993. Automated discourse generation using discourse structure relations. Artificial intelligence, 63(1-2):341–385. M. Johnson. 2007. Why doesn’t EM find good HMM POS-taggers. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 296–305. M.I. Jordan, Z. Ghahramani, T.S. Jaakkola, and L.K. Saul. 1999. An introduction to variational methods for graphical models. Machine learning, 37(2):183– 233. H. Kamp. 1981. A theory of truth and semantic representation. Formal methods in the study of language, 1:277–322. M. Lapata. 2006. Automatic evaluation of information ordering: Kendall’s tau. Computational Linguistics, 32(4):471–484. L. Lov´asz and M.D. Plummer. 1986. Matching theory. Elsevier Science Ltd. Y. Lu and C. Zhai. 2008. Opinion integration through semi-supervised topic modeling. In Proceeding of the 17th international conference on World Wide Web, pages 121–130. Daniel Marcu. 1998. The rhetorical parsing of natural language texts. In ACL ’98, pages 96–103. Q. Mei, X. Ling, M. Wondra, H. Su, and C.X. Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of the 16th international conference on World Wide Web, pages 171–180. L.R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286. R. 
Soricut and D. Marcu. 2003. Sentence level discourse parsing using syntactic and lexical information. In Proceedings of the 2003 Conference of the NAACLHTC, pages 149–156. B. Sun, P. Mitra, C.L. Giles, J. Yen, and H. Zha. 2007. Topic segmentation with shared topic detection and alignment of multiple documents. In Proceedings of the 30th ACM SIGIR, pages 199–206. ChengXiang Zhai, Atulya Velivelli, and Bei Yu. 2004. A cross-collection mixture model for comparative text minning. In Proceeding of the 10th ACM SIGKDD international conference on Knowledge discovery in data mining, pages 743–748. L. Zhuang, F. Jing, and X.Y. Zhu. 2006. Movie review mining and summarization. In Proceedings of the 15th ACM international conference on Information and knowledge management, pages 43–50. 1535
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1536–1545, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Automatic Labelling of Topic Models Jey Han Lau,♠♥Karl Grieser,♥David Newman,♠♦and Timothy Baldwin♠♥ ♠NICTA Victoria Research Laboratory ♥Dept of Computer Science and Software Engineering, University of Melbourne ♦Dept of Computer Science, University of California Irvine [email protected], [email protected], [email protected], [email protected] Abstract We propose a method for automatically labelling topics learned via LDA topic models. We generate our label candidate set from the top-ranking topic terms, titles of Wikipedia articles containing the top-ranking topic terms, and sub-phrases extracted from the Wikipedia article titles. We rank the label candidates using a combination of association measures and lexical features, optionally fed into a supervised ranking model. Our method is shown to perform strongly over four independent sets of topics, significantly better than a benchmark method. 1 Introduction Topic modelling is an increasingly popular framework for simultaneously soft-clustering terms and documents into a fixed number of “topics”, which take the form of a multinomial distribution over terms in the document collection (Blei et al., 2003). It has been demonstrated to be highly effective in a wide range of tasks, including multidocument summarisation (Haghighi and Vanderwende, 2009), word sense discrimination (Brody and Lapata, 2009), sentiment analysis (Titov and McDonald, 2008), information retrieval (Wei and Croft, 2006) and image labelling (Feng and Lapata, 2010). One standard way of interpreting a topic is to use the marginal probabilities p(wi|tj) associated with each term wi in a given topic tj to extract out the 10 terms with highest marginal probability. This results in term lists such as:1 stock market investor fund trading investment firm exchange companies share 1Here and throughout the paper, we will represent a topic tj via its ranking of top-10 topic terms, based on p(wi|tj). which are clearly associated with the domain of stock market trading. The aim of this research is to automatically generate topic labels which explicitly identify the semantics of the topic, i.e. which take us from a list of terms requiring interpretation to a single label, such as STOCK MARKET TRADING in the above case. The approach proposed in this paper is to first generate a topic label candidate set by: (1) sourcing topic label candidates from Wikipedia by querying with the top-N topic terms; (2) identifying the top-ranked document titles; and (3) further postprocessing the document titles to extract sub-strings. We translate each topic label into features extracted from Wikipedia, lexical association with the topic terms in Wikipedia documents, and also lexical features for the component terms. This is used as the basis of a support vector regression model, which ranks each topic label candidate. Our contributions in this work are: (1) the generation of a novel evaluation framework and dataset for topic label evaluation; (2) the proposal of a method for both generating and scoring topic label candidates; and (3) strong in- and cross-domain results across four independent document collections and associated topic models, demonstrating the ability of our method to automatically label topics with remarkable success. 
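The conventional top-N presentation of a topic referred to above is, concretely, just a ranking of the vocabulary by p(wi|tj). A minimal sketch, assuming a T x V topic-word matrix and an index-to-term list from a trained LDA model, is:

import numpy as np

def top_terms(topic_word, vocab, n=10):
    # topic_word[t] is the multinomial p(w | t) for topic t; vocab maps word indices to terms
    return [[vocab[i] for i in np.argsort(-topic_word[t])[:n]]
            for t in range(topic_word.shape[0])]

Throughout the paper, these ranked term lists are what the candidate generation and ranking steps operate on.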
2 Related Work Topics are conventionally interpreted via their topN terms, ranked based on the marginal probability p(wi|tj) in that topic (Blei et al., 2003; Griffiths and Steyvers, 2004). This entails a significant cognitive load in interpretation, prone to subjectivity. Topics are also sometimes presented with manual post-hoc labelling for ease of interpretation in research publications (Wang and McCallum, 2006; Mei et al., 1536 2006). This has obvious disadvantages in terms of subjectivity, and lack of reproducibility/automation. The closest work to our method is that of Mei et al. (2007), who proposed various unsupervised approaches for automatically labelling topics, based on: (1) generating label candidates by extracting either bigrams or noun chunks from the document collection; and (2) ranking the label candidates based on KL divergence with a given topic. Their proposed methodology generates a generic list of label candidates for all topics using only the document collection. The best method uses bigrams exclusively, in the form of the top-1000 bigrams based on the Student’s t-test. We reimplement their method and present an empirical comparison in Section 5.3. In other work, Magatti et al. (2009) proposed a method for labelling topics induced by a hierarchical topic model. Their label candidate set is the Google Directory (gDir) hierarchy, and label selection takes the form of ontological alignment with gDir. The experiments presented in the paper are highly preliminary, although the results certainly show promise. However, the method is only applicable to a hierarchical topic model and crucially relies on a pre-existing ontology and the class labels contained therein. Pantel and Ravichandran (2004) addressed the more specific task of labelling a semantic class by applying Hearst-style lexico-semantic patterns to each member of that class. When presented with semantically homogeneous, fine-grained nearsynonym clusters, the method appears to work well. With topic modelling, however, the top-ranking topic terms tended to be associated and not lexically similar to one another. It is thus highly questionable whether their method could be applied to topic models, but it would certainly be interesting to investigate whether our model could conversely be applied to the labelling of sets of near-synonyms. In recent work, Lau et al. (2010) proposed to approach topic labelling via best term selection, i.e. selecting one of the top-10 topic terms to label the overall topic. While it is often possible to label topics with topic terms (as is the case with the stock market topic above), there are also often cases where topic terms are not appropriate as labels. We reuse a selection of the features proposed by Lau et al. (2010), and return to discuss it in detail in Section 3. While not directly related to topic labelling, Chang et al. (2009) were one of the first to propose human labelling of topic models, in the form of synthetic intruder word and topic detection tasks. In the intruder word task, they include a term w with low marginal probability p(w|t) for topic t into the topN topic terms, and evaluate how well both humans and their model are able to detect the intruder. The potential applications for automatic labelling of topics are many and varied. In document collection visualisation, e.g., the topic model can be used as the basis for generating a two-dimensional representation of the document collection (Newman et al., 2010a). 
Regions where documents have a high marginal probability p(di|tj) of being associated with a given topic can be explicitly labelled with the learned label, rather than just presented as an unlabelled region, or presented with a dense “term cloud” from the original topic. In topic modelbased selectional preference learning (Ritter et al., 2010; `O S´eaghdha, 2010), the learned topics can be translated into semantic class labels (e.g. DAYS OF THE WEEK), and argument positions for individual predicates can be annotated with those labels for greater interpretability/portability. In dynamic topic models tracking the diachronic evolution of topics in time-sequenced document collections (Blei and Lafferty, 2006), labels can greatly enhance the interpretation of what topics are “trending” at any given point in time. 3 Methodology The task of automatic labelling of topics is a natural progression from the best topic term selection task of Lau et al. (2010). In that work, the authors use a reranking framework to produce a ranking of the top-10 topic terms based on how well each term – in isolation – represents a topic. For example, in our stock market investor fund trading ... topic example, the term trading could be considered as a more representative term of the overall semantics of the topic than the top-ranked topic term stock. While the best term could be used as a topic label, topics are commonly ideas or concepts that are better expressed with multiword terms (for example STOCK MARKET TRADING), or terms that might not be in the top-10 topic terms (for example, COLOURS 1537 would be a good label for a topic of the form red green blue cyan ...). In this paper, we propose a novel method for automatic topic labelling that first generates topic label candidates using English Wikipedia, and then ranks the candidates to select the best topic labels. 3.1 Candidate Generation Given the size and diversity of English Wikipedia, we posit that the vast majority of (coherent) topics or concepts are encapsulated in a Wikipedia article. By making this assumption, the difficult task of generating potential topic labels is transposed to finding relevant Wikipedia articles, and using the title of each article as a topic label candidate. We first use the top-10 topic terms (based on the marginal probabilities from the original topic model) to query Wikipedia, using: (a) Wikipedia’s native search API; and (b) a site-restricted Google search. The combined set of top-8 article titles returned from the two search engines for each topic constitutes the initial set of primary candidates. Next we chunk parse the primary candidates using the OpenNLP chunker,2 and extract out all noun chunks. For each noun chunk, we generate all component n-grams (including the full chunk), out of which we remove all n-grams which are not in themselves article titles in English Wikipedia. For example, if the Wikipedia document title were the single noun chunk United States Constitution, we would generate the bigrams United States and States Constitution, and prune the latter; we would also generate the unigrams United, States and Constitution, all of which exist as Wikipedia articles and are preserved. In this way, an average of 30–40 secondary labels are produced for each topic based on noun chunk ngrams. A good portion of these labels are commonly stopwords or unigrams that are only marginally related to the topic (an artifact of the n-gram generation process). 
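The n-gram expansion step just described is mechanical; the sketch below assumes a chunking function and a set of English Wikipedia article titles are available (OpenNLP and a title list in the paper), and leaves the RACO-based outlier filtering, described next, out of the picture.

def secondary_candidates(primary_titles, noun_chunks, wiki_titles):
    # noun_chunks: callable mapping an article title to its list of noun chunk strings
    # wiki_titles: set of (lower-cased) English Wikipedia article titles
    candidates = set()
    for title in primary_titles:
        for chunk in noun_chunks(title):
            tokens = chunk.split()
            for length in range(1, len(tokens) + 1):
                for start in range(len(tokens) - length + 1):
                    ngram = " ".join(tokens[start:start + length])
                    if ngram.lower() in wiki_titles:   # prune n-grams that are not article titles
                        candidates.add(ngram)
    return candidates

For the United States Constitution example above, this keeps United States, United, States and Constitution, and discards States Constitution.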
To remove these outlier labels, we use the RACO lexical association method of Grieser et al. (2011). RACO (Related Article Conceptual Overlap) uses Wikipedia’s link structure and category membership to identify the strength of relationship between arti2http://opennlp.sourceforge.net/ cles via their category overlap. The set of categories related to an article is defined as the union of the category membership of all outlinks in that article. The category overlap of two articles (a and b) is the intersection of the related category sets of each article. The formal definition of this measure is as follows: |(∪p∈O(a)C(p)) ∩(∪p∈O(b)C(p))| where O(a) is the set of outlinks from article a, and C(p) is the set of categories of which article p is a member. This is then normalised using Dice’s coefficient to generate a similarity measure. In the instance that a term maps onto multiple Wikipedia articles via a disambiguation page, we return the best RACO score across article pairings for a given term pair. The final score for each secondary label candidate is calculated as the average RACO score with each of the primary label candidates. All secondary labels with an average RACO score of 0.1 and above are added to the label candidate set. Finally, we add the top-5 topic terms to the set of candidates, based on the marginals from the original topic model. Doing this ensures that there are always label candidates for all topics (even if the Wikipedia searches fail), and also allows the possibility of labeling a topic using its own topic terms, which was demonstrated by Lau et al. (2010) to be a baseline source of topic label candidates. 3.2 Candidate Ranking After obtaining the set of topic label candidates, the next step is to rank the candidates to find the best label for each topic. We will first describe the features that we use to represent label candidates. 3.2.1 Features A good label should be strongly associated with the topic terms. To learn the association of a label candidate with the topic terms, we use several lexical association measures: pointwise mutual information (PMI), Student’s t-test, Dice’s coefficient, Pearson’s χ2 test, and the log likelihood ratio (Pecina, 2009). We also include conditional probability and reverse conditional probability measures, based on the work of Lau et al. (2010). To calculate the association measures, we parse the full collection of English Wikipedia articles using a sliding window of width 1538 20, and obtain term frequencies for the label candidates and topic terms. To measure the association between a label candidate and a list of topic terms, we average the scores of the top-10 topic terms. In addition to the association measures, we include two lexical properties of the candidate: the raw number of terms, and the relative number of terms in the label candidate that are top-10 topic terms. We also include a search engine score for each label candidate, which we generate by querying a local copy of English Wikipedia with the top-10 topic terms, using the Zettair search engine (based on BM25 term similarity).3 For a given label candidate, we return the average score for the Wikipedia article(s) associated with it. 3.2.2 Unsupervised and Supervised Ranking Each of the proposed features can be used as the basis for an unsupervised model for label candidate selection, by ranking the label candidates for a given topic and selecting the top-N. 
Alternatively, they can be combined in a supervised model, by training over topics where we have gold-standard labelling of the label candidates. For the supervised method, we use a support vector regression (SVR) model (Joachims, 2006) over all of the features. 4 Datasets We conducted topic labelling experiments using document collections constructed from four distinct domains/genres, to test the domain/genre independence of our method: BLOGS : 120,000 blog articles dated from August to October 2008 from the Spinn3r blog dataset4 BOOKS : 1,000 English language books from the Internet Archive American Libraries collection NEWS : 29,000 New York Times news articles dated from July to September 1999, from the English Gigaword corpus PUBMED : 77,000 PubMed biomedical abstracts published in June 2010 3http://www.seg.rmit.edu.au/zettair/ 4http://www.icwsm.org/data/ The BLOGS dataset contains blog posts that cover a diverse range of subjects, from product reviews to casual, conversational messages. The BOOKS topics, coming from public-domain out-of-copyright books (with publication dates spanning more than a century), relate to a wide range of topics including furniture, home decoration, religion and art, and have a more historic feel to them. The NEWS topics reflect the types and range of subjects one might expect in news articles such as health, finance, entertainment, and politics. The PUBMED topics frequently contain domain-specific terms and are sharply differentiated from the topics for the other corpora. We are particularly interested in the performance of the method over PUBMED, as it is a highly specialised domain where we may expect lower coverage of appropriate topic labels within Wikipedia. We took a standard approach to topic modelling each of the four document collections: we tokenised, lemmatised and stopped each document,5 and created a vocabulary of terms that occurred at least ten times. From this processed data, we created a bag-of-words representation of each document, and learned topic models with T = 100 topics in each case. To focus our experiments on topics that were relatively more coherent and interpretable, we first used the method of Newman et al. (2010b) to calculate the average PMI-score for each topic, and filtered all topics that had an average PMI-score lower than 0.4. We additionally filtered any topics where less than 5 of the top-10 topic terms are default nominal in Wikipedia.6 The filtering criteria resulted in 45 topics for BLOGS, 38 topics for BOOKS, 60 topics for NEWS, and 85 topics for PUBMED. Manual inspection of the discarded topics indicated that they were predominantly hard-to-label junk topics or mixed topics, with limited utility for document/term clustering. Applying our label candidate generation methodology to these 228 topics produced approximately 6000 labels — an average of 27 labels per topic. 5OpenNLP is used for tokenization, Morpha for lemmatization (Minnen et al., 2001). 6As determined by POS tagging English Wikipedia with OpenNLP, and calculating the coarse-grained POS priors (noun, verb, etc.) for each term. 1539 Figure 1: A screenshot of the topic label evaluation task on Amazon Mechanical Turk. This screen constitutes a Human Intelligence Task (HIT); it contains a topic followed by 10 suggested topic labels, which are to be rated. Note that been would be the stopword label in this example. 4.1 Topic Candidate Labelling To evaluate our methods and train the supervised method, we require gold-standard ratings for the label candidates. 
To this end, we used Amazon Mechanical Turk to collect annotations for our labels. In our annotation task, each topic was presented in the form of its top-10 terms, followed by 10 suggested labels for the topic. This constitutes a Human Intelligence Task (HIT); annotators are paid based on the number of HITs they have completed. A screenshot of a HIT seen by annotator is presented in Figure 1. In each HIT, annotators were asked to rate the labels based on the following ordinal scale: 3: Very good label; a perfect description of the topic. 2: Reasonable label, but does not completely capture the topic. 1: Label is semantically related to the topic, but would not make a good topic label. 0: Label is completely inappropriate, and unrelated to the topic. To filter annotations from workers who did not perform the task properly or from spammers, we ap1540 Domain Topic Terms Label Candidate Average Rating BLOGS china chinese olympics gold olympic team win beijing medal sport 2008 summer olympics 2.60 BOOKS church arch wall building window gothic nave side vault tower gothic architecture 2.40 NEWS israel peace barak israeli minister palestinian agreement prime leader palestinians israeli-palestinian conflict 2.63 PUBMED cell response immune lymphocyte antigen cytokine t-cell induce receptor immunity immune system 2.36 Table 1: A sample of topics and topic labels, along with the average rating for each label candidate plied a few heuristics to automatically detect these workers. Additionally, we inserted a small number of stopwords as label candidates in each HIT and recorded workers who gave high ratings to these stopwords. Annotations from workers who failed to passed these tests are removed from the final set of gold ratings. Each label candidate was rated in this way by at least 10 annotators, and ratings from annotators who passed the filter were combined by averaging them. A sample of topics, label candidates, and the average rating is presented in Table 1.7 Finally, we train the regression model over all the described features, using the human rating-based ranking. 5 Experiments In this section we present our experimental results for the topic labelling task, based on both the unsupervised and supervised methods, and the methodology of Mei et al. (2007), which we denote MSZ for the remainder of the paper. 5.1 Evaluation We use two basic measures to evaluate the performance of our predictions. Top-1 average rating is the average annotator rating given to the top-ranked system label, and has a maximum value of 3 (where annotators unanimously rated all top-ranked system labels with a 3). This is intended to give a sense of the absolute utility of the top-ranked candidates. The second measure is normalized discounted cumulative gain (nDCG: Jarvelin and Kekalainen (2002), Croft et al. (2009)), computed for the top-1 (nDCG-1), top-3 (nDCG-3) and top-5 ranked system labels (nDCG-5). For a given ordered list of 7The dataset is available for download from http://www.csse.unimelb.edu.au/research/ lt/resources/acl2011-topic/. scores, this measure is based on the difference between the original order, and the order when the list is sorted by score. That is, if items are ranked optimally in descending order of score at position N, nDCG-N is equal to 1. nDCG is a normalised score, and indicates how close the candidate label ranking is to the optimal ranking within the set of annotated candidates, noting that an nDCG-N score of 1 tells us nothing about absolute values of the candidates. 
This second evaluation measure is thus intended to reflect the relative quality of the ranking, and complements the top-1 average rating. Note that conventional precision- and recall-based evaluation is not appropriate for our task, as each label candidate has a real-valued rating. As a baseline for the task, we use the unsupervised label candidate ranking method based on Pearson’s χ2 test, as it was overwhelmingly found to be the pick of the features for candidate ranking. 5.2 Results for the Supervised Method For the supervised model, we present both indomain results based on 10-fold cross-validation, and cross-domain results where we learn a model from the ratings for the topic model from a given domain, and apply it to a second domain. In each case, we learn an SVR model over the full set of features described in Section 3.2.1. In practical terms, in-domain results make the unreasonable assumption that we have labelled 90% of labels in order to be able to label the remaining 10%, and crossdomain results are thus the more interesting data point in terms of the expected results when applying our method to a novel topic model. It is valuable to compare the two, however, to gauge the relative impact of domain on the results. We present the results for the supervised method in Table 2, including the unsupervised baseline and an upper bound estimate for comparison purposes. The upper bound is calculated by ranking the candi1541 Test Domain Training Top-1 Average Rating nDCG-1 nDCG-3 nDCG-5 All 1◦ 2◦ Top5 BLOGS Baseline (unsupervised) 1.84 1.87 1.75 1.74 0.75 0.77 0.79 In-domain 1.98 1.94 1.95 1.77 0.81 0.82 0.83 Cross-domain: BOOKS 1.88 1.92 1.90 1.77 0.77 0.81 0.83 Cross-domain: NEWS 1.97 1.94 1.92 1.77 0.80 0.83 0.83 Cross-domain: PUBMED 1.95 1.95 1.93 1.82 0.80 0.82 0.83 Upper bound 2.45 2.26 2.29 2.18 1.00 1.00 1.00 BOOKS Baseline (unsupervised) 1.75 1.76 1.70 1.72 0.77 0.77 0.79 In-domain 1.91 1.90 1.83 1.74 0.84 0.81 0.83 Cross-domain: BLOGS 1.82 1.88 1.79 1.71 0.79 0.81 0.82 Cross-domain: NEWS 1.82 1.87 1.80 1.75 0.79 0.81 0.83 Cross-domain: PUBMED 1.87 1.87 1.80 1.73 0.81 0.82 0.83 Upper bound 2.29 2.17 2.15 2.04 1.00 1.00 1.00 NEWS Baseline (unsupervised) 1.96 1.76 1.87 1.70 0.80 0.79 0.78 In-domain 2.02 1.92 1.90 1.82 0.82 0.82 0.84 Cross-domain: BLOGS 2.03 1.92 1.89 1.85 0.83 0.82 0.84 Cross-domain: BOOKS 2.01 1.80 1.93 1.73 0.82 0.82 0.83 Cross-domain: PUBMED 2.01 1.93 1.94 1.80 0.82 0.82 0.83 Upper bound 2.45 2.31 2.33 2.12 1.00 1.00 1.00 PUBMED Baseline (unsupervised) 1.73 1.74 1.68 1.63 0.75 0.77 0.79 In-domain 1.79 1.76 1.74 1.67 0.77 0.82 0.84 Cross-domain: BLOGS 1.80 1.77 1.73 1.69 0.78 0.82 0.84 Cross-domain: BOOKS 1.77 1.70 1.74 1.64 0.77 0.82 0.83 Cross-domain: NEWS 1.79 1.76 1.73 1.65 0.77 0.82 0.84 Upper bound 2.31 2.17 2.22 2.01 1.00 1.00 1.00 Table 2: Supervised results for all domains dates based on the annotated human ratings. The upper bound for top-1 average rating is thus the highest average human rating of all label candidates for a given topic, while the upper bound for the nDCG measures will always be 1. In addition to results for the combined candidate set, we include results for each of the three candidate subsets, namely the primary Wikipedia labels (“1◦”), the secondary Wikipedia labels (“2◦”) and the top-5 topic terms (“Top5”); the nDCG results are over the full candidate set only, as the numbers aren’t directly comparable over the different subsets (due to differences in the number of candidates and the distribution of ratings). 
Comparing the in-domain and cross-domain results, we observe that they are largely comparable, with the exception of BOOKS, where there is a noticeable drop in both top-1 average rating and nDGC-1 when we use cross-domain training. We see an appreciable drop in scores when we train BOOKS against BLOGS (or vice versa), which we analyse as being due to incompatibility in document content and structure between these two domains. Overall though, the results are very encouraging, and point to the plausibility of using labelled topic models from independent domains to learn the best topic labels for a new domain. Returning to the question of the suitability of label candidates for the highly specialised PUBMED document collection, we first notice that the upper bound top-1 average rating is comparable to the other domains, indicating that our method has been able to extract equivalent-quality label candidates from Wikipedia. The top-1 average ratings of the supervised method are lower than the other domains. We hypothesise that the cause of the drop is that the lexical association measures are trained over highly diverse Wikipedia data rather than biomedical-specific data, and predict that the results would improve if we trained our features over PubMed. The results are uniformly better than the unsupervised baselines for all four corpora, although there is quite a bit of room for improvement relative to the upper bound. To better gauge the quality of these results, we carry out a direct comparison of our proposed method with the best-performing method of MSZ in Section 5.3. 1542 Looking to the top-1 average score results over the different candidate sets, we observe first that the upper bound for the combined candidate set (“All”) is higher than the scores for the candidate subsets in all cases, underlining the complementarity of the different candidate sets. We also observe that the top-5 topic term candidate set is the lowest performer out of the three subsets across all four corpora, in terms of both upper bound and the results for the supervised method. This reinforces our comments about the inferiority of the topic word selection method of Lau et al. (2010) for topic labelling purposes. For NEWS and PUBMED, there is a noticeable difference between the results of the supervised method over the full candidate set and each of the candidate subsets. In contrast, for BOOKS and BLOGS, the results for the primary candidate subset are at times actually higher than those over the full candidate set in most cases (but not for the upper bound). This is due to the larger search space in the full candidate set, and the higher median quality of candidates in the primary candidate set. 5.3 Comparison with MSZ The best performing method out of the suite of approaches proposed by MSZ method exclusively uses bigrams extracted from the document collection, ranked based on Student’s t-test. The potential drawbacks to this approach are: all labels must be bigrams, there must be explicit token instances of a given bigram in the document collection for it to be considered as a label candidate, and furthermore, there must be enough token instances in the document collection for it to have a high t score. To better understand the performance difference of our approach to that of MSZ, we perform direct comparison of our proposed method with the benchmark method of MSZ. 
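For reference, the t-test ranking used by this baseline can be sketched as below. The sketch applies the standard collocation t-score approximation over the whole collection; the exact formula and thresholds are assumptions, candidate extraction (done in the original with the N-gram Statistics Package) is reduced to simple bigram counting, and the per-topic relevance scoring of MSZ is not reproduced.

```python
import math
from collections import Counter

def bigram_t_scores(tokens, min_count=5):
    """Student's t collocation scores for bigrams in a tokenised collection.

    Uses the usual approximation t = (observed - expected) / sqrt(observed / N),
    with observed the bigram's relative frequency and expected the product of
    its unigram relative frequencies. min_count is an illustrative threshold.
    """
    n = len(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    scores = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        observed = c / n
        expected = (unigrams[w1] / n) * (unigrams[w2] / n)
        scores[(w1, w2)] = (observed - expected) / math.sqrt(observed / n)
    return scores

def top_bigram_labels(tokens, k=2000):
    scores = bigram_t_scores(tokens)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```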
5.3.1 Candidate Ranking First, we compare the candidate ranking methodology of our method with that of MSZ, using the label candidates extracted by the MSZ method. We first extracted the top-2000 bigrams using the N-gram Statistics Package (Banerjee and Pedersen, 2003). We then ranked the bigrams for each topic using the Student’s t-test. We included the top-5 labels generated for each topic by the MSZ method in our Mechanical Turk annotation task, and use the annotations to directly compare the two methods. To measure the performance of candidate ranking between our supervised method and MSZ’s, we re-rank the top-5 labels extracted by MSZ using our SVR methodology (in-domain) and compare the top-1 average rating and nDCG scores. Results are shown in Table 3. We do not include results for the BOOKS domain because the text collection is much larger than the other domains, and the computation for the MSZ relevance score ranking is intractable due to the number of n-grams (a significant shortcoming of the method). Looking at the results for the other domains, it is clear that our ranking system has the upper hand: it consistently outperforms MSZ over every evaluation metric.8 Comparing the top-1 average rating results back to those in Table 2, we observe that for all three domains, the results for MSZ are below those of the unsupervised baseline, and well below those of our supervised method. The nDCG results are more competitive, and the nDCG-3 results are actually higher than our original results in Table 2. It is important to bear in mind, however, that the numbers are in each case relative to a different label candidate set. Additionally, the results in Table 3 are based on only 5 candidates, with a relatively flat gold-standard rating distribution, making it easier to achieve higher nDCG-5 scores. 5.3.2 Candidate Generation The method of MSZ makes the implicit assumption that good bigram labels are discoverable within the document collection. In our method, on the other hand, we (efficiently) access the much larger and variable n-gram length set of English Wikipedia article titles, in addition to the top-5 topic terms. To better understand the differences in label candidate sets, and the relative coverage of the full label candidate set in each case, we conducted another survey where human users were asked to suggest one topic label for each topic presented. The survey consisted, once again, of presenting annotators with a topic, but in this case, we gave them the open task of proposing the ideal label for 8Based on a single ANOVA, the difference in results is statistically significant at the 5% level for BLOGS, and 1% for NEWS and PUBMED. 1543 Test Domain Candidate Ranking Top-1 nDCG-1 nDCG-3 nDCG-5 System Avg. Rating BLOGS MSZ 1.26 0.65 0.76 0.87 SVR 1.41 0.75 0.85 0.92 Upper bound 1.87 1.00 1.00 1.00 NEWS MSZ 1.37 0.73 0.81 0.90 SVR 1.66 0.88 0.90 0.95 Upper bound 1.86 1.00 1.00 1.00 PUBMED MSZ 1.53 0.77 0.85 0.93 SVR 1.73 0.87 0.91 0.96 Upper bound 1.98 1.00 1.00 1.00 Table 3: Comparison of results for our proposed supervised ranking method (SVR) and that of MSZ the topic. In this, we did not enforce any restrictions on the type or size of label (e.g. the number of terms in the label). Of the manually-generated gold-standard labels, approximately 36% were contained in the original document collection, but 60% were Wikipedia article titles. 
This indicates that our method has greater potential to generate a label of the quality of the ideal proposed by a human in a completely open-ended task. 6 Discussion On the subject of suitability of using Amazon Mechanical Turk for natural language tasks, Snow et al. (2008) demonstrated that the quality of annotation is comparable to that of expert annotators. With that said, the PUBMED topics are still a subject of interest, as these topics often contain biomedical terms which could be difficult for the general populace to annotate. As the number of annotators per topic and the number of annotations per annotator vary, there is no immediate way to calculate the inter-annotator agreement. Instead, we calculated the MAE score for each candidate, which is an average of the absolute difference between an annotator’s rating and the average rating of a candidate, summed across all candidates to get the MAE score for a given corpus. The MAE scores for each corpus are shown in Table 4, noting that a smaller value indicates higher agreement. As the table shows, the agreement for the PUBMED domain is comparable with the other datasets. BLOGS and NEWS have marginally better Corpus MAE BLOGS 0.50 BOOKS 0.56 NEWS 0.52 PUBMED 0.56 Table 4: Average MAE score for label candidate rating over each corpus agreement, almost certainly because of the greater immediacy of the topics, covering everyday areas such as lifestyle and politics. BOOKS topics are occasionally difficult to label due to the breadth of the domain; e.g. consider a topic containing terms extracted from Shakespeare sonnets. 7 Conclusion This paper has presented the task of topic labelling, that is the generation and scoring of labels for a given topic. We generate a set of label candidates from the top-ranking topic terms, titles of Wikipedia articles containing the top-ranking topic terms, and also a filtered set of sub-phrases extracted from the Wikipedia article titles. We rank the label candidates using a combination of association measures, lexical features and an Information Retrieval feature. Our method is shown to perform strongly over four independent sets of topics, and also significantly better than a competitor system. Acknowledgements NICTA is funded by the Australian government as represented by Department of Broadband, Communication and Digital Economy, and the Australian Research Council through the ICT centre of Excellence programme. DN has also been supported by a grant from the Institute of Museum and Library Services, and a Google Research Award. 1544 References S. Banerjee and T. Pedersen. 2003. The design, implementation, and use of the Ngram Statistic Package. In Proceedings of the Fourth International Conference on Intelligent Text Processing and Computational Linguistics, pages 370–381, Mexico City, February. D.M. Blei and J.D. Lafferty. 2006. Dynamic topic models. In ICML 2006. D.M. Blei, A.Y. Ng, and M.I. Jordan. 2003. Latent Dirichlet allocation. JMLR, 3:993–1022. S. Brody and M. Lapata. 2009. Bayesian word sense induction. In EACL 2009, pages 103–111. J. Chang, J. Boyd-Graber, S. Gerrish, C. Wang, and D. Blei. 2009. Reading tea leaves: How humans interpret topic models. In NIPS, pages 288–296. B. Croft, D. Metzler, and T. Strohman. 2009. Search Engines: Information Retrieval in Practice. Addison Wesley. Y. Feng and M. Lapata. 2010. Topic models for image annotation and text illustration. 
In Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2010), pages 831–839, Los Angeles, USA, June. K. Grieser, T. Baldwin, F. Bohnert, and L. Sonenberg. 2011. Using ontological and document similarity to estimate museum exhibit relatedness. ACM Journal on Computing and Cultural Heritage, 3(3):1–20. T. Griffiths and M. Steyvers. 2004. Finding scientific topics. In PNAS, volume 101, pages 5228–5235. A. Haghighi and L. Vanderwende. 2009. Exploring content models for multi-document summarization. In HLT: NAACL 2009, pages 362–370. K. Jarvelin and J. Kekalainen. 2002. Cumulated gainbased evaluation of IR techniques. ACM Transactions on Information Systems, 20(4). T. Joachims. 2006. Training linear svms in linear time. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), pages 217–226, New York, NY, USA. ACM. J.H. Lau, D. Newman, S. Karimi, and T. Baldwin. 2010. Best topic word selection for topic labelling. In Coling 2010: Posters, pages 605–613, Beijing, China. D. Magatti, S. Calegari, D. Ciucci, and F. Stella. 2009. Automatic labeling of topics. In ISDA 2009, pages 1227–1232, Pisa, Italy. Q. Mei, C. Liu, H. Su, and C. Zhai. 2006. A probabilistic approach to spatiotemporal theme pattern mining on weblogs. In WWW 2006, pages 533–542. Q. Mei, X. Shen, and C. Zhai. 2007. Automatic labeling of multinomial topic models. In SIGKDD, pages 490– 499. G. Minnen, J. Carroll, and D. Pearce. 2001. Applied morphological processing of English. Journal of Natural Language Processing, 7(3):207–223. D. Newman, T. Baldwin, L. Cavedon, S. Karimi, D. Martinez, and J. Zobel. 2010a. Visualizing document collections and search results using topic mapping. Journal of Web Semantics, 8(2-3):169–175. D. Newman, J.H. Lau, K. Grieser, and T. Baldwin. 2010b. Automatic evaluation of topic coherence. In Proceedings of Human Language Technologies: The 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL HLT 2010), pages 100–108, Los Angeles, USA, June. Association for Computational Linguistics. D. `O S´eaghdha. 2010. Latent variable models of selectional preference. In ACL 2010. P. Pantel and D. Ravichandran. 2004. Automatically labeling semantic classes. In HLT/NAACL-04, pages 321–328. P. Pecina. 2009. Lexical Association Measures: Collocation Extraction. Ph.D. thesis, Charles University. A. Ritter, Mausam, and O. Etzioni. 2010. A latent Dirichlet allocation method for selectional preferences. In ACL 2010. R. Snow, B. O’Connor, D. Jurafsky, and A. Y. Ng. 2008. Cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. In EMNLP ’08: Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 254– 263, Morristown, NJ, USA. I. Titov and R. McDonald. 2008. Modeling online reviews with multi-grain topic models. In WWW ’08, pages 111–120. X. Wang and A. McCallum. 2006. Topics over time: A non-Markov continuous-time model of topical trends. In KDD, pages 424–433. S. Wei and W.B. Croft. 2006. LDA-based document models for ad-hoc retrieval. In SIGIR ’06, pages 178– 185. 1545
2011
154
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1546–1555, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Using Bilingual Information for Cross-Language Document Summarization Xiaojun Wan Institute of Compute Science and Technology, Peking University, Beijing 100871, China Key Laboratory of Computational Linguistics (Peking University), MOE, China [email protected] Abstract Cross-language document summarization is defined as the task of producing a summary in a target language (e.g. Chinese) for a set of documents in a source language (e.g. English). Existing methods for addressing this task make use of either the information from the original documents in the source language or the information from the translated documents in the target language. In this study, we propose to use the bilingual information from both the source and translated documents for this task. Two summarization methods (SimFusion and CoRank) are proposed to leverage the bilingual information in the graph-based ranking framework for cross-language summary extraction. Experimental results on the DUC2001 dataset with manually translated reference Chinese summaries show the effectiveness of the proposed methods. 1 Introduction Cross-language document summarization is defined as the task of producing a summary in a different target language for a set of documents in a source language (Wan et al., 2010). In this study, we focus on English-to-Chinese cross-language summarization, which aims to produce Chinese summaries for English document sets. The task is very useful in the field of multilingual information access. For example, it is beneficial for most Chinese readers to quickly browse and understand English news documents or document sets by reading the corresponding Chinese summaries. A few pilot studies have investigated the task in recent years and exiting methods make use of either the information in the source language or the information in the target language after using machine translation. In particular, for the task of English-to-Chinese cross-language summarization, one method is to directly extract English summary sentences based on English features extracted from the English documents, and then automatically translate the English summary sentences into Chinese summary sentences. The other method is to automatically translate the English sentences into Chinese sentences, and then directly extract Chinese summary sentences based on Chinese features. The two methods make use of the information from only one language side. However, it is not very reliable to use only the information in one language, because the machine translation quality is far from satisfactory, and thus the translated Chinese sentences usually contain some errors and noises. For example, the English sentence “Many destroyed power lines are thought to be uninsured, as are trees and shrubs uprooted across a wide area.” is automatically translated into the Chinese sentence “许多破坏电源线被认 为是保险的,因为是连根拔起的树木和灌木, 在广泛的领域。” by using Google Translate1 , but the Chinese sentence contains a few translation errors. Therefore, on the one side, if we rely only on the English-side information to extract Chinese 1 http://translate.google.com/. Note that the translation service is updated frequently and the current translation results may be different from that presented in this paper. 
1546 summary sentences, we cannot guarantee that the automatically translated Chinese sentences for salient English sentences are really salient when these sentences may contain many translation errors and other noises. On the other side, if we rely only on the Chinese-side information to extract Chinese summary sentences, we cannot guarantee that the selected sentences are really salient because the features for sentence ranking based on the incorrectly translated sentences are not very reliable, either. In this study, we propose to leverage both the information in the source language and the information in the target language for cross-language document summarization. In particular, we propose two graph-based summarization methods (SimFusion and CoRank) for using both Englishside and Chinese-side information in the task of English-to-Chinese cross-document summarization. The SimFusion method linearly fuses the Englishside similarity and the Chinese-side similarity for measuring Chinese sentence similarity. The CoRank method adopts a co-ranking algorithm to simultaneously rank both English sentences and Chinese sentences by incorporating mutual influences between them. We use the DUC2001 dataset with manually translated reference Chinese summaries for evaluation. Experimental results based on the ROUGE metrics show the effectiveness of the proposed methods. Three important conclusions for this task are summarized below: 1) The Chinese-side information is more beneficial than the English-side information. 2) The Chinese-side information and the English-side information can complement each other. 3) The proposed CoRank method is more reliable and robust than the proposed SimFusion method. The rest of this paper is organized as follows: Section 2 introduces related work. In Section 3, we present our proposed methods. Evaluation results are shown in Section 4. Lastly, we conclude this paper in Section 5. 2 Related Work 2.1 General Document Summarization Document summarization methods can be extraction-based, abstraction-based or hybrid methods. We focus on extraction-based methods in this study, and the methods directly extract summary sentences from a document or document set by ranking the sentences in the document or document set. In the task of single document summarization, various features have been investigated for ranking sentences in a document, including term frequency, sentence position, cue words, stigma words, and topic signature (Luhn 1969; Lin and Hovy, 2000). Machine learning techniques have been used for sentence ranking (Kupiec et al., 1995; Amini and Gallinari, 2002). Litvak et al. (2010) present a language-independent approach for extractive summarization based on the linear optimization of several sentence ranking measures using a genetic algorithm. In recent years, graph-based methods have been proposed for sentence ranking (Erkan and Radev, 2004; Mihalcea and Tarau, 2004). Other methods include mutual reinforcement principle (Zha 2002; Wan et al., 2007). In the task of multi-document summarization, the centroid-based method (Radev et al., 2004) ranks the sentences in a document set based on such features as cluster centroids, position and TFIDF. Machine Learning techniques have also been used for feature combining (Wong et al., 2008). Nenkova and Louis (2008) investigate the influences of input difficulty on summarization performance. Pitler et al. 
(2010) present a systematic assessment of several diverse classes of metrics designed for automatic evaluation of linguistic quality of multi-document summaries. Celikyilmaz and Hakkani-Tur (2010) formulate extractive summarization as a two-step learning problem by building a generative model for pattern discovery and a regression model for inference. Aker et al. (2010) propose an A* search algorithm to find the best extractive summary up to a given length, and they propose a discriminative training algorithm for directly maximizing the quality of the best summary. Graph-based methods have also been used to rank sentences for multi-document summarization (Mihalcea and Tarau, 2005; Wan and Yang, 2008). 1547 2.2 Cross-Lingual Document Summarization Several pilot studies have investigated the task of cross-language document summarization. The existing methods use only the information in either language side. Two typical translation schemes are document translation or summary translation. The document translation scheme first translates the source documents into the corresponding documents in the target language, and then extracts summary sentences based only on the information on the target side. The summary translation scheme first extracts summary sentences from the source documents based only on the information on the source side, and then translates the summary sentences into the corresponding summary sentences in the target language. For example Leuski et al. (2003) use machine translation for English headline generation for Hindi documents. Lim et al. (2004) propose to generate a Japanese summary by using Korean summarizer. Chalendar et al. (2005) focus on semantic analysis and sentence generation techniques for cross-language summarization. Orasan and Chiorean (2008) propose to produce summaries with the MMR method from Romanian news articles and then automatically translate the summaries into English. Cross language query based summarization has been investigated in (Pingali et al., 2007), where the query and the documents are in different languages. Wan et al. (2010) adopt the summary translation scheme for the task of English-to-Chinese cross-language summarization. They first extract English summary sentences by using English-side features and the machine translation quality factor, and then automatically translate the English summary into Chinese summary. Other related work includes multilingual summarization (Lin et al., 2005; Siddharthan and McKeown, 2005), which aims to create summaries from multiple sources in multiple languages. 3 Our Proposed Methods As mentioned in Section 1, existing methods rely only on one-side information for sentence ranking, which is not very reliable. In order to leveraging both-side information for sentence ranking, we propose the following two methods to incorporate the bilingual information in different ways. 3.1 SimFusion This method uses the English-side information for Chinese sentence ranking in the graph-based framework. The sentence similarities in the two languages are fused in the method. In other words, when we compute the similarity value between two Chinese sentences, the similarity value between the corresponding two English sentences is used by linear fusion. Since sentence similarity evaluation plays a very important role in the graph-based ranking algorithm, this method can leverage bothside information through similarity fusion. 
Formally, given the Chinese document set Dcn translated from an English document set, let Gcn=(Vcn, Ecn) be an undirected graph to reflect the relationships between the sentences in the Chinese document set. Vcn is the set of vertices and each vertex scn i in Vcn represents a Chinese sentence. Ecn is the set of edges. Each edge ecn ij in Ecn is associated with an affinity weight f(scn i, scn j) between sentences scn i and scn j (i≠j). The weight is computed by linearly combining the similarity value simcosine(scn i, scn j) between the Chinese sentences and the similarity value simcosine(sen i, sen j) between the corresponding English sentences. ) , ( ) 1( ) , ( ) , ( cos cos en j en i ine cn j cn i ine cn j cn i s s sim s s sim s s f ⋅ − + ⋅ = λ λ where sen j and sen i are the source English sentences for scn j and scn i. λ∈[0, 1] is a parameter to control the relative contributions of the two similarity values. The similarity values simcosine(scn i, scn j) and simcosine(sen i, sen j) are computed by using the standard cosine measure. The weight for each term is computed based on the TFIDF formula. For Chinese similarity computation, Chinese word segmentation is performed. Here, we have f(scn i, scn j)=f(scn j, scn i) and let f(scn i, scn i)=0 to avoid self transition. We use an affinity matrix Mcn to describe Gcn with each entry corresponding to the weight of an edge in the graph. Mcn=(Mcn ij)|Vcn|×|Vcn| is defined as Mcn ij=f(scn i,scn j). Then Mcn is normalized to cn M~ to make the sum of each row equal to 1. Based on matrix cn M~ , the saliency score InfoScore(scn i) for sentence scn i can be deduced from those of all other sentences linked with it and it can be formulated in a recursive form as in the PageRank algorithm: 1548 ∑ ≠ − + ⋅ ⋅ = i all j cn ji cn j cn i n M s InfoScore s InfoScore ) 1( ~ ) ( ) ( μ μ where n is the sentence number, i.e. n= |Vcn|. μ is the damping factor usually set to 0.85, as in the PageRank algorithm. For numerical computation of the saliency scores, we can iteratively run the above equation until convergence. For multi-document summarization, some sentences are highly overlapping with each other, and thus we apply the same greedy algorithm in Wan et al. (2006) to penalize the sentences highly overlapping with other highly scored sentences, and finally the salient and novel Chinese sentences are directly selected as summary sentences. 3.2 CoRank This method leverages both the English-side information and the Chinese-side information in a co-ranking way. The source English sentences and the translated Chinese sentences are simultaneously ranked in a unified graph-based algorithm. The saliency of each English sentence relies not only on the English sentences linked with it, but also on the Chinese sentences linked with it. Similarly, the saliency of each Chinese sentence relies not only on the Chinese sentences linked with it, but also on the English sentences linked with it. More specifically, the proposed method is based on the following assumptions: Assumption 1: A Chinese sentence would be salient if it is heavily linked with other salient Chinese sentences; and an English sentence would be salient if it is heavily linked with other salient English sentences. Assumption 2: A Chinese sentence would be salient if it is heavily linked with salient English sentences; and an English sentence would be salient if it is heavily linked with salient Chinese sentences. 
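Before turning to how these two assumptions are combined, the SimFusion ranking of Section 3.1 can be illustrated with a short sketch. This is a minimal Python/numpy illustration with λ and μ set to the values used in the paper; computing the cosine similarities and the final greedy redundancy-removal step are omitted.

```python
import numpy as np

def simfusion_rank(sim_cn, sim_en, lam=0.8, mu=0.85, iters=100):
    """PageRank-style saliency scores for the translated Chinese sentences.

    sim_cn[i, j]: cosine similarity between Chinese sentences i and j.
    sim_en[i, j]: cosine similarity between the corresponding English sentences.
    """
    n = sim_cn.shape[0]
    f = lam * sim_cn + (1.0 - lam) * sim_en      # fused affinity weights
    np.fill_diagonal(f, 0.0)                     # no self transitions
    row_sums = f.sum(axis=1, keepdims=True)
    m = np.divide(f, row_sums, out=np.zeros_like(f), where=row_sums > 0)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):                       # iterate until (approximately) converged
        scores = mu * m.T.dot(scores) + (1.0 - mu) / n
    return scores
```

Setting λ=1 reduces to ranking on the Chinese-side similarity alone, and λ=0 to ranking the Chinese sentences on the English-side similarity alone.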
The first assumption is similar to PageRank which makes use of mutual “recommendations” between the sentences in the same language to rank sentences. The second assumption is similar to HITS if the English sentences and the Chinese sentences are considered as authorities and hubs, respectively. In other words, the proposed method aims to fuse the ideas of PageRank and HITS in a unified framework. The mutual influences between the Chinese sentences and the English sentences are incorporated in the method. Figure 1 gives the graph representation for the method. Three kinds of relationships are exploited: the CN-CN relationships between Chinese sentences, the EN-EN relationships between English sentences, and the EN-CN relationships between English sentences and Chinese sentences. Formally, given an English document set Den and the translated Chinese document set Dcn, let G=(Ven, Vcn, Een, Ecn, Eencn) be an undirected graph to reflect all the three kinds of relationships between the sentences in the two document sets. Ven ={sen i | 1≤i≤n} is the set of English sentences. Vcn={scn i | 1≤i≤n} is the set of Chinese sentences. scn i is the corresponding Chinese sentence translated from sen i. n is the number of the sentences. Een is the edge set to reflect the relationships between the English sentences. Ecn is the edge set to reflect the relationships between the Chinese sentences. Eencn is the edge set to reflect the relationships between the English sentences and the Chinese sentences. Based on the graph representation, we compute the following three affinity matrices to reflect the three kinds of sentence relationships: Figure 1. The three kinds of sentence relationships 1) Mcn=(Mcn ij)n×n: This affinity matrix aims to reflect the relationships between the Chinese sentences. Each entry in the matrix corresponds to the cosine similarity between the two Chinese sentences. ⎪⎩ ⎪⎨ ⎧ ≠ = otherwise , j , if i s s sim M cn j cn i ine cn ij 0 ) , ( cos English Sentences CN-CN EN-EN EN-CN Chinese sentences 1549 Then Mcn is normalized to cn M~ to make the sum of each row equal to 1. 2) Men=(Men i,j)n×n: This affinity matrix aims to reflect the relationships between the English sentences. Each entry in the matrix corresponds to the cosine similarity between the two English sentences. ⎪⎩ ⎪⎨ ⎧ ≠ = otherwise , j , if i s s sim M en j en i ine en ij 0 ) , ( cos Then Men is normalized to en M~ to make the sum of each row equal to 1. 3) Mencn=(Mencn ij)n×n: This affinity matrix aims to reflect the relationships between the English sentences and the Chinese sentences. Each entry Mencn ij in the matrix corresponds to the similarity between the English sentence sen i and the Chinese sentence scn j. It is hard to directly compute the similarity between the sentences in different languages. In this study, the similarity value is computed by fusing the following two similarity values: the cosine similarity between the sentence sen i and the corresponding source English sentence sen j for scn j, and the cosine similarity between the corresponding translated Chinese sentence scn i for sen i and the sentence scn j. We use the geometric mean of the two values as the affinity weight. ) , ( ) , ( cos cos cn j cn i ine en j en i ine encn ij s s sim s s sim M × = Note that we have Mencn ij=Mencn ji and Mencn=(Mencn)T. Then Mencn is normalized to encn M~ to make the sum of each row equal to 1. 
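A minimal sketch of how the three row-normalized affinity matrices could be assembled from precomputed cosine similarity matrices follows (numpy assumed; variable names are illustrative):

```python
import numpy as np

def row_normalize(m):
    sums = m.sum(axis=1, keepdims=True)
    return np.divide(m, sums, out=np.zeros_like(m), where=sums > 0)

def build_affinity_matrices(sim_cn, sim_en):
    """sim_cn / sim_en: cosine similarities over the n Chinese sentences and
    over their n source English sentences, aligned by index."""
    m_cn, m_en = sim_cn.copy(), sim_en.copy()
    np.fill_diagonal(m_cn, 0.0)   # the i == j entries are zero by definition
    np.fill_diagonal(m_en, 0.0)
    # Cross-language affinity: geometric mean of sim(en_i, en_j) and sim(cn_i, cn_j);
    # unlike the monolingual matrices, its definition does not exclude i == j.
    m_encn = np.sqrt(sim_en * sim_cn)
    return row_normalize(m_cn), row_normalize(m_en), row_normalize(m_encn)
```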
We use two column vectors u=[u(scn i)]n×1 and v =[v(sen j)]n×1 to denote the saliency scores of the Chinese sentences and the English sentences, respectively. Based on the three kinds of relationships, we can get the following four assumptions: ∑ ∝ j cn j cn ji cn i s u M s u ) ( ~ ) ( ∑ ∝ i en i en ij en j s v M s v ) ( ~ ) ( ∑ ∝ j en j encn ji cn i s v M s u ) ( ~ ) ( ∑ ∝ i cn i encn ij en j s u M s v ) ( ~ ) ( After fusing the above equations, we can obtain the following iterative forms: ∑ ∑ + = j en j encn ji j cn j cn ji cn i s v M β s u M α s u ) ( ~ ) ( ~ ) ( ∑ ∑ + = i cn i encn ij i en i en ij en j s u M β s v M α s v ) ( ~ ) ( ~ ) ( And the matrix form is: v M u M u cn T encn T β α ) ~ ( ) ~ ( + = u M v M v en T encn T β α ) ~ ( ) ~ ( + = where α and β specify the relative contributions to the final saliency scores from the information in the same language and the information in the other language and we have α+β=1. For numerical computation of the saliency scores, we can iteratively run the two equations until convergence. Usually the convergence of the iteration algorithm is achieved when the difference between the scores computed at two successive iterations for any sentences and words falls below a given threshold. In order to guarantee the convergence of the iterative form, u and v are normalized after each iteration. After we get the saliency scores u for the Chinese sentences, we apply the same greedy algorithm for redundancy removing. Finally, a few highly ranked sentences are selected as summary sentences. 4 Experimental Evaluation 4.1 Evaluation Setup There is no benchmark dataset for English-toChinese cross-language document summarization, so we built our evaluation dataset based on the DUC2001 dataset by manually translating the reference summaries. DUC2001 provided 30 English document sets for generic multi-document summarization. The average document number per document set was 10. The sentences in each article have been separated and the sentence information has been stored into files. Three or two generic reference English summaries were provided by NIST annotators for each document set. Three graduate students were employed to manually translate the reference English summaries into reference Chinese summaries. Each student manually translated one third of the reference summaries. It was much easier and more reliable to provide the reference Chinese summaries by manual translation than by manual summarization. 1550 ROUGE-2 Average_F ROUGE-W Average_F ROUGE-L Average_F ROUGE-SU4 Average_F Baseline(EN) 0.03723 0.05566 0.13259 0.07177 Baseline(CN) 0.03805 0.05886 0.13871 0.07474 SimFusion 0.04017 0.06117 0.14362 0.07645 CoRank 0.04282 0.06158 0.14521 0.07805 Table 1: Comparison Results All the English sentences in the document set were automatically translated into Chinese sentences by using Google Translate, and the Stanford Chinese Word Segmenter2 was used for segmenting the Chinese documents and summaries into words. For comparative study, the summary length was limited to five sentences, i.e. each Chinese summary consisted of five sentences. We used the ROUGE-1.5.5 (Lin and Hovy, 2003) toolkit for evaluation, which has been widely adopted by DUC and TAC for automatic summarization evaluation. It measured summary quality by counting overlapping units such as the n-gram, word sequences and word pairs between the candidate summary and the reference summary. 
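Returning briefly to the method itself, the CoRank update rules above reduce to a few lines of matrix arithmetic. The sketch below is a minimal numpy illustration with α set to the value used in the experiments; the norm used for the per-iteration normalization and the stopping threshold are not specified and are assumptions here.

```python
import numpy as np

def corank(m_cn, m_en, m_encn, alpha=0.5, iters=100, tol=1e-6):
    """Co-rank the Chinese and English sentences.

    m_cn, m_en, m_encn: the row-normalized affinity matrices of Section 3.2.
    Returns u, the saliency scores of the Chinese sentences.
    """
    beta = 1.0 - alpha
    n = m_cn.shape[0]
    u = np.full(n, 1.0 / n)   # Chinese sentence saliency
    v = np.full(n, 1.0 / n)   # English sentence saliency
    for _ in range(iters):
        u_new = alpha * m_cn.T.dot(u) + beta * m_encn.T.dot(v)
        v_new = alpha * m_en.T.dot(v) + beta * m_encn.T.dot(u)
        u_new /= np.linalg.norm(u_new)   # normalize after each iteration
        v_new /= np.linalg.norm(v_new)
        converged = max(np.abs(u_new - u).max(), np.abs(v_new - v).max()) < tol
        u, v = u_new, v_new
        if converged:
            break
    return u
```

The highest-scoring Chinese sentences, after the same greedy redundancy filter, then form the summary.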
We showed three of the ROUGE F-measure scores in the experimental results: ROUGE-2 (bigrambased), ROUGE-W (based on weighted longest common subsequence, weight=1.2), ROUGE-L (based on longest common subsequences), and ROUGE-SU4 (based on skip bigram with a maximum skip distance of 4). Note that the ROUGE toolkit was performed for Chinese summaries after using word segmentation. Two graph-based baselines were used for comparison. Baseline(EN): This baseline adopts the summary translation scheme, and it relies on the English-side information for English sentence ranking. The extracted English summary is finally automatically translated into the corresponding Chinese summary. The same sentence ranking algorithm with the SimFusion method is adopted, and the affinity weight is computed based only on the cosine similarity between English sentences. Baseline(CN): This baseline adopts the document translation scheme, and it relies on the Chinese-side information for Chinese sentence ranking. The Chinese summary sentences are directly extracted from the translated Chinese documents. The same sentence ranking algorithm with the SimFusion method is adopted, and the affinity 2 http://nlp.stanford.edu/software/segmenter.shtml weight is computed based only on the cosine similarity between Chinese sentences. For our proposed methods, the parameter values are empirically set as λ=0.8 and α=0.5. 4.2 Results and Discussion Table 1 shows the comparison results for our proposed methods and the baseline methods. Seen from the tables, Baseline(CN) performs better than Baseline(EN) over all the metrics. The results demonstrate that the Chinese-side information is more beneficial than the English-side information for cross-document summarization, because the summary sentences are finally selected from the Chinese side. Moreover, our proposed two methods can outperform the two baselines over all the metrics. The results demonstrate the effectiveness of using bilingual information for cross-language document summarization. It is noteworthy that the ROUGE scores in the table are not high due to the following two reasons: 1) The use of machine translation may introduce many errors and noises in the peer Chinese summaries; 2) The use of Chinese word segmentation may introduce more noises and mismatches in the ROUGE evaluation based on Chinese words. We can also see that the CoRank method can outperform the SimFusion method over all metrics. The results show that the CoRank method is more suitable for the task by incorporating the bilingual information into a unified ranking framework. In order to show the influence of the value of the combination parameter λ on the performance of the SimFusion method, we present the performance curves over the four metrics in Figures 2 through 5, respectively. In the figures, λ ranges from 0 to 1, and λ=1 means that SimFusion is the same with Baseline(CN), and λ=0 means that only Englishside information is used for Chinese sentence ranking. We can see that when λ is set to a value larger than 0.5, SimFusion can outperform the two baselines over most metrics. The results show that λ can be set in a relatively wide range. Note that 1551 λ>0.5 means that SimFusion relies more on the Chinese-side information than on the English-side information. Therefore, the Chinese-side information is more beneficial than the English-side information. 
In order to show the influence of the value of the combination parameter α on the performance of the CoRank method, we present the performance curves over the four metrics in Figures 6 through 9, respectively. In the figures, α ranges from 0.1 to 0.9, and a larger value means that the information from the same language side is more relied on, and a smaller value means that the information from the other language side is more relied on. We can see that CoRank can always outperform the two baselines over all metrics with different value of α. The results show that α can be set in a very wide range. We also note that a very large value or a very small value of α can lower the performance values. The results demonstrate that CoRank relies on both the information from the same language side and the information from the other language side for sentence ranking. Therefore, both the Chinese-side information and the English-side information can complement each other, and they are beneficial to the final summarization performance. Comparing Figures 2 through 5 with Figures 6 through 9, we can further see that the CoRank method is more stable and robust than the SimFusion method. The CoRank method can outperform the SimFusion method with most parameter settings. The bilingual information can be better incorporated in the unified ranking framework of the CoRank method. Finally, we show one running example for the document set D59 in the DUC2001 dataset. The four summaries produced by the four methods are listed below: Baseline(EN): 周日的崩溃是24 年来第一次乘客在涉及西 北飞机事故中丧生。有乘客和观察员的报告,这架飞机的右翼引 擎也坠毁前失败。在坠机现场联邦航空局官员表示不会揣测关于 崩溃或在飞机上的发动机评论的原因。美国联邦航空局的记录显 示,除了那些涉及的飞机坠毁,与JT8D 涡轮路段-200 系列发动 机问题的三个共和国在过去四年的航班发生的事件。1988 年7 月,一个联合国的DC-10 坠毁的苏城,艾奥瓦州后,发动机在飞 行中发生外,造成112 人。 Baseline(CN): 第二,在美国历史上最严重的事故是1987 年8 月16 日,坠毁,造成156 人时,美国西北航空公司飞机上 的底特律都市机场起飞时坠毁。据美国联邦航空管理局的纪录, 麦道公司的MD-82 飞机在1985 年和1986 年紧急降落后,在其两 个引擎之一是失去权力。周日的崩溃是24 年来第一次乘客在涉 及西北飞机事故中丧生。今年4 月,国家运输安全委员会敦促美 国联邦航空局后进行一些危险,发动机故障,飞机的一个发动机 的200 系列JT8D 安全调查。目前,机组人员发现了一个黑人师 谁说,他可以引导飞机在附近的人们听到了他们的区域。 SimFusion: 第二,在美国历史上最严重的事故是1987 年8 月16 日,坠毁,造成156 人时,美国西北航空公司飞机上的底 特律都市机场起飞时坠毁。周日的崩溃是24 年来第一次乘客在 涉及西北飞机事故中丧生。在坠机现场联邦航空局官员表示不会 揣测关于崩溃或在飞机上的发动机评论的原因。有乘客和观察员 的报告,这架飞机的右翼引擎也坠毁前失败。据美国联邦航空管 理局的纪录,麦道公司的MD-82 飞机在1985 年和1986 年紧急降 落后,在其两个引擎之一是失去权力。 CoRank : 周日的崩溃是24 年来第一次乘客在涉及西北飞 机事故中丧生。第二,在美国历史上最严重的事故是1987 年8 月16 日,坠毁,造成156 人时,美国西北航空公司飞机上的底 特律都市机场起飞时坠毁。在坠机现场联邦航空局官员表示不会 揣测关于崩溃或在飞机上的发动机评论的原因。最严重的航空事 故不断,在美国是一个在芝加哥的美国航空公司客机1979 年崩 溃。有乘客和观察员的报告,这架飞机的右翼引擎也坠毁前失 败。 5 Conclusion and Future Work In this paper, we propose two methods (SimFusion and CoRank) to address the cross-language document summarization task by leveraging the bilingual information in both the source and target language sides. Evaluation results demonstrate the effectiveness of the proposed methods. The Chinese-side information is validated to be more beneficial than the English-side information, and the CoRank method is more robust than the SimFusion method. In future work, we will investigate to use the machine translation quality factor to further improve the fluency of the Chinese summary, as in Wan et al. (2010). Though our attempt to use GIZA++ for evaluating the similarity between Chinese sentences and English sentences failed, we will exploit more advanced measures based on statistical alignment model for cross-language similarity computation. Acknowledgments This work was supported by NSFC (60873155), Beijing Nova Program (2008B03) and NCET (NCET-08-0006). 
We thank the three students for translating the reference summaries. We also thank the anonymous reviewers for their useful comments. 1552 0.03 0.032 0.034 0.036 0.038 0.04 0.042 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 λ ROUGE-2(F) SimFusion Baseline(EN) Baseline(CN) Figure 2. ROUGE-2(F) vs. λ for SimFusion 0.052 0.053 0.054 0.055 0.056 0.057 0.058 0.059 0.06 0.061 0.062 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 λ ROUGE-W(F) SimFusion Baseline(EN) Baseline(CN) Figure 3. ROUGE-W(F) vs. λ for SimFusion 0.125 0.127 0.129 0.131 0.133 0.135 0.137 0.139 0.141 0.143 0.145 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 λ ROUGE-L(F) SimFusion Baseline(EN) Baseline(CN) Figure 4. ROUGE-L(F) vs. λ for SimFusion 0.064 0.066 0.068 0.07 0.072 0.074 0.076 0.078 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 λ ROUGE-SU4(F) SimFusion Baseline(EN) Baseline(CN) Figure 5. ROUGE-SU4(F) vs. λ for SimFusion 0.036 0.037 0.038 0.039 0.04 0.041 0.042 0.043 0.044 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 α ROUGE-2(F) CoRank Baseline(EN) Baseline(CN) Figure 6. ROUGE-2(F) vs. α for CoRank 0.055 0.056 0.057 0.058 0.059 0.06 0.061 0.062 0.063 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 α ROUGE-W(F) CoRank Baseline(EN) Baseline(CN) Figure 7. ROUGE-W(F) vs. α for CoRank 0.13 0.132 0.134 0.136 0.138 0.14 0.142 0.144 0.146 0.148 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 α ROUGE-L(F) CoRank Baseline(EN) Baseline(CN) Figure 8. ROUGE-L(F) vs. α for CoRank 0.07 0.071 0.072 0.073 0.074 0.075 0.076 0.077 0.078 0.079 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 α ROUGE-SU4(F) CoRank Baseline(EN) Baseline(CN) Figure 9. ROUGE-SU4(F) vs. α for CoRank 1553 References A. Aker, T. Cohn, and R. Gaizauskas. 2010. Multidocument summarization using A* search and discriminative training. In Proceedings of EMNLP2010. M. R. Amini, P. Gallinari. 2002. The Use of Unlabeled Data to Improve Supervised Learning for Text Summarization. In Proceedings of SIGIR2002. G. de Chalendar, R. Besançon, O. Ferret, G. Grefenstette, and O. Mesnard. 2005. Crosslingual summarization with thematic extraction, syntactic sentence simplification, and bilingual generation. In Workshop on Crossing Barriers in Text Summarization Research, 5th International Conference on Recent Advances in Natural Language Processing (RANLP2005). A. Celikyilmaz and D. Hakkani-Tur. 2010. A hybrid hierarchical model for multi-document summarization. In Proceedings of ACL2010. G. ErKan, D. R. Radev. LexPageRank. 2004. Prestige in Multi-Document Text Summarization. In Proceedings of EMNLP2004. D. Klein and C. D. Manning. 2002. Fast Exact Inference with a Factored Model for Natural Language Parsing. In Proceedings of NIPS2002. J. Kupiec, J. Pedersen, F. Chen. 1995. A.Trainable Document Summarizer. In Proceedings of SIGIR1995. A. Leuski, C.-Y. Lin, L. Zhou, U. Germann, F. J. Och, E. Hovy. 2003. Cross-lingual C*ST*RD: English access to Hindi information. ACM Transactions on Asian Language Information Processing, 2(3): 245-269. J.-M. Lim, I.-S. Kang, J.-H. Lee. 2004. Multidocument summarization using cross-language texts. In Proceedings of NTCIR-4. C. Y. Lin, E. Hovy. 2000. The Automated Acquisition of Topic Signatures for Text Summarization. In Proceedings of the 17th Conference on Computational Linguistics. C.-Y. Lin and E.H. Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Cooccurrence Statistics. In Proceedings of HLTNAACL -03. C.-Y. Lin, L. Zhou, and E. Hovy. 2005. Multilingual summarization evaluation 2005: automatic evaluation report. In Proceedings of MSE (ACL2005 Workshop). M. Litvak, M. Last, and M. Friedman. 2010. 
A new approach to improving multilingual summarization using a genetic algorithm. In Proceedings of ACL2010. H. P. Luhn. 1969. The Automatic Creation of literature Abstracts. IBM Journal of Research and Development, 2(2). R. Mihalcea, P. Tarau. 2004. TextRank: Bringing Order into Texts. In Proceedings of EMNLP2004. R. Mihalcea and P. Tarau. 2005. A language independent algorithm for single and multiple document summarization. In Proceedings of IJCNLP-05. A. Nenkova and A. Louis. 2008. Can you summarize this? Identifying correlates of input difficulty for generic multi-document summarization. In Proceedings of ACL-08:HLT. A. Nenkova, R. Passonneau, and K. McKeown. 2007. The Pyramid method: incorporating human content selection variation in summarization evaluation. ACM Transactions on Speech and Language Processing (TSLP), 4(2). C. Orasan, and O. A. Chiorean. 2008. Evaluation of a Crosslingual Romanian-English Multidocument Summariser. In Proceedings of 6th Language Resources and Evaluation Conference (LREC2008). P. Pingali, J. Jagarlamudi and V. Varma. 2007. Experiments in cross language query focused multi-document summarization. In Workshop on Cross Lingual Information Access Addressing the Information Need of Multilingual Societies in IJCAI2007. E. Pitler, A. Louis, and A. Nenkova. 2010. Automatic evaluation of linguistic quality in multidocument summarization. In Proceedings of ACL2010. D. R. Radev, H. Y. Jing, M. Stys and D. Tam. 2004. Centroid-based summarization of multiple documents. Information Processing and Management, 40: 919-938. 1554 A. Siddharthan and K. McKeown. 2005. Improving multilingual summarization: using redundancy in the input to correct MT errors. In Proceedings of HLT/EMNLP-2005. X. Wan, H. Li and J. Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of ACL2010. X. Wan, J. Yang and J. Xiao. 2006. Using crossdocument random walks for topic-focused multi-documetn summarization. In Proceedings of WI2006. X. Wan and J. Yang. 2008. Multi-document summarization using cluster-based link analysis. In Proceedings of SIGIR-08. X. Wan, J. Yang and J. Xiao. 2007. Towards an Iterative Reinforcement Approach for Simultaneous Document Summarization and Keyword Extraction. In Proceedings of ACL2007. K.-F. Wong, M. Wu and W. Li. 2008. Extractive summarization using supervised and semisupervised learning. In Proceedings of COLING-08. H. Y. Zha. 2002. Generic Summarization and Keyphrase Extraction Using Mutual Reinforcement Principle and Sentence Clustering. In Proceedings of SIGIR2002. 1555
2011
155
Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1556–1565, Portland, Oregon, June 19-24, 2011. c⃝2011 Association for Computational Linguistics Exploiting Web-Derived Selectional Preference to Improve Statistical Dependency Parsing Guangyou Zhou, Jun Zhao∗, Kang Liu, and Li Cai National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences 95 Zhongguancun East Road, Beijing 100190, China {gyzhou,jzhao,kliu,lcai}@nlpr.ia.ac.cn Abstract In this paper, we present a novel approach which incorporates the web-derived selectional preferences to improve statistical dependency parsing. Conventional selectional preference learning methods have usually focused on word-to-class relations, e.g., a verb selects as its subject a given nominal class. This paper extends previous work to wordto-word selectional preferences by using webscale data. Experiments show that web-scale data improves statistical dependency parsing, particularly for long dependency relationships. There is no data like more data, performance improves log-linearly with the number of parameters (unique N-grams). More importantly, when operating on new domains, we show that using web-derived selectional preferences is essential for achieving robust performance. 1 Introduction Dependency parsing is the task of building dependency links between words in a sentence, which has recently gained a wide interest in the natural language processing community. With the availability of large-scale annotated corpora such as Penn Treebank (Marcus et al., 1993), it is easy to train a high-performance dependency parser using supervised learning methods. However, current state-of-the-art statistical dependency parsers (McDonald et al., 2005; McDonald and Pereira, 2006; Hall et al., 2006) tend to have ∗Correspondence author: [email protected] lower accuracies for longer dependencies (McDonald and Nivre, 2007). The length of a dependency from word wi to word wj is simply equal to |i −j|. Longer dependencies typically represent the modifier of the root or the main verb, internal dependencies of longer NPs or PP-attachment in a sentence. Figure 1 shows the F1 score1 relative to the dependency length on the development set by using the graph-based dependency parsers (McDonald et al., 2005; McDonald and Pereira, 2006). We note that the parsers provide very good results for adjacent dependencies (96.89% for dependency length =1), while the dependency length increases, the accuracies degrade sharply. These longer dependencies are therefore a major opportunity to improve the overall performance of dependency parsing. Usually, these longer dependencies can be parsed dependent on the specific words involved due to the limited range of features (e.g., a verb and its modifiers). Lexical statistics are therefore needed for resolving ambiguous relationships, yet the lexicalized statistics are sparse and difficult to estimate directly. To solve this problem, some information with different granularity has been investigated. Koo et al. (2008) proposed a semi-supervised dependency parsing by introducing lexical intermediaries at a coarser level than words themselves via a cluster method. This approach, however, ignores the selectional preference for word-to-word interactions, such as head-modifier relationship. 
Extra resources 1Precision represents the percentage of predicted arcs of length d that are correct, and recall measures the percentage of gold-standard arcs of length d that are correctly predicted. F1 = 2 × precision × recall/(precision + recall) 1556 1 5 10 15 20 25 30 0.7 0.75 0.8 0.85 0.9 0.95 1 Dependency Length F1 Score (%) MST1 MST2 Figure 1: F score relative to dependency length. beyond the annotated corpora are needed to capture the bi-lexical relationship at the word-to-word level. Our purpose in this paper is to exploit webderived selectional preferences to improve the supervised statistical dependency parsing. All of our lexical statistics are derived from two kinds of webscale corpus: one is the web, which is the largest data set that is available for NLP (Keller and Lapata, 2003). Another is a web-scale N-gram corpus, which is a N-gram corpus with N-grams of length 15 (Brants and Franz, 2006), we call it Google V1 in this paper. The idea is very simple: web-scale data have large coverage for word pair acquisition. By leveraging some assistant data, the dependency parsing model can directly utilize the additional information to capture the word-to-word level relationships. We address two natural and related questions which some previous studies leave open: Question I: Is there a benefit in incorporating web-derived selectional preference features for statistical dependency parsing, especially for longer dependencies? Question II: How well do web-derived selectional preferences perform on new domains? For Question I, we systematically assess the value of using web-scale data in state-of-the-art supervised dependency parsers. We compare dependency parsers that include or exclude selectional preference features obtained from web-scale corpus. To the best of our knowledge, none of the existing studies directly address long dependencies of dependency parsing by using web-scale data. Most statistical parsers are highly domain dependent. For example, the parsers trained on WSJ text perform poorly on Brown corpus. Some studies have investigated domain adaptation for parsers (McClosky et al., 2006; Daum´e III, 2007; McClosky et al., 2010). These approaches assume that the parsers know which domain it is used, and that it has access to representative data in that domain. However, in practice, these assumptions are unrealistic in many real applications, such as when processing the heterogeneous genre of web texts. In this paper we incorporate the web-derived selectional preference features to design our parsers for robust opendomain testing. We conduct the experiments on the English Penn Treebank (PTB) (Marcus et al., 1993). The results show that web-derived selectional preference can improve the statistical dependency parsing, particularly for long dependency relationships. More importantly, when operating on new domains, the webderived selectional preference features show great potential for achieving robust performance (Section 4.3). The remainder of this paper is divided as follows. Section 2 gives a brief introduction of dependency parsing. Section 3 describes the web-derived selectional preference features. Experimental evaluation and results are reported in Section 4. Finally, we discuss related work and draw conclusion in Section 5 and Section 6, respectively. 2 Dependency Parsing In dependency parsing, we attempt to build headmodifier (or head-dependent) relations between words in a sentence. 
The discriminative parser we used in this paper is based on the part-factored model and features of the MSTParser (McDonald et al., 2005; McDonald and Pereira, 2006; Carreras, 2007). The parsing model can be defined as a conditional distribution p(y|x; w) over each projective parse tree y for a particular sentence x, parameterized by a vector w. The probability of a parse tree is p(y|x; w) = 1 Z(x; w)exp { ∑ ρ∈y w · Φ(x, ρ) } (1) where Z(x; w) is the partition function and Φ are part-factored feature functions that include head1557 modifier parts, sibling parts and grandchild parts. Given the training set {(xi, yi)}N i=1, parameter estimation for log-linear models generally resolve around optimization of a regularized conditional log-likelihood objective w∗ = arg minwL(w) where L(w) = −C N ∑ i=1 logp(yi|xi; w) + 1 2||w||2 (2) The parameter C > 0 is a constant dictating the level of regularization in the model. Since objective function L(w) is smooth and convex, which is convenient for standard gradient-based optimization techniques. In this paper we use the dual exponentiated gradient (EG)2 descent, which is a particularly effective optimization algorithm for log-linear models (Collins et al., 2008). 3 Web-Derived Selectional Preference Features In this paper, we employ two different feature sets: a baseline feature set3 which draw upon “normal” information source, such as word forms and part-ofspeech (POS) without including the web-derived selectional preference4 features, a feature set conjoins the baseline features and the web-derived selectional preference features. 3.1 Web-scale resources All of our selectional preference features described in this paper rely on probabilities derived from unlabeled data. To use the largest amount of data possible, we exploit web-scale resources. one is web, Ngram counts are approximated by Google hits. Another we use is Google V1 (Brants and Franz, 2006). This N-gram corpus records how often each unique sequence of words occurs. N-grams appearing 40 2http://groups.csail.mit.edu/nlp/egstra/ 3This kind of feature sets are similar to other feature sets in the literature (McDonald et al., 2005; Carreras, 2007), so we will not attempt to give a exhaustive description. 4Selectional preference tells us which arguments are plausible for a particular predicate, one way to determine the selectional preference is from co-occurrences of predicates and arguments in text (Bergsma et al., 2008). In this paper, the selectional preferences have the same meaning with N-grams, which model the word-to-word relationships, rather than only considering the predicates and arguments relationships. obj det det root obj mod subj Figure 2: An example of a labeled dependency tree. The tree contains a special token “$” which is always the root of the tree. Each arc is directed from head to modifier and has a label describing the function of the attachment. times or more (1 in 25 billion) are kept, and appear in the n-gram tables. All n-grams with lower counts are discarded. Co-occurrence probabilities can be calculated directly from the N-gram counts. 3.2 Web-derived N-gram features 3.2.1 PMI Previous work on noun compounds bracketing has used adjacency model (Resnik, 1993) and dependency model (Lauer, 1995) to compute association statistics between pairs of words. 
In this paper we generalize the adjacency and dependency models by including the pointwise mutual information (Church and Hanks, 1990) between all pairs of words in the dependency tree:

\mathrm{PMI}(x, y) = \log \frac{p(\text{"x y"})}{p(\text{"x"})\, p(\text{"y"})}   (3)

where p("x y") is the co-occurrence probability. When using the Google V1 corpus, these probabilities can be calculated directly from the N-gram counts; when using Google hits, we send the queries to the Google search engine5 and perform all search queries as exact matches by using quotation marks.6 The value of these features is the PMI, if it is defined. If the PMI is undefined, following the work of Pitler et al. (2010), we include one of two binary features: p("x y") = 0, or p("x") ∨ p("y") = 0.

5 http://www.google.com/
6 Google only allows automated querying through the Google Web API; this involves obtaining a license key, which restricts the number of queries to a daily quota of 1,000. However, we obtained a quota of 20,000 queries per day by sending a request to [email protected] for research purposes.

Besides, we also consider trigram features between the three words in the dependency tree:

\mathrm{PMI}(x, y, z) = \log \frac{p(\text{"x y z"})}{p(\text{"x y"})\, p(\text{"y z"})}   (4)

Such trigram features can directly capture the sibling and grandchild relationships used, for example, in MSTParser. We illustrate the PMI features with the example dependency tree in Figure 2. In deciding the dependency between the main verb hit and the preposition with heading its argument, examples of the N-gram PMI features and their conjunctions with the baseline features are shown in Table 1.

PMI("hit with")
xi-word="hit", xj-word="with", PMI("hit with")
xi-word="hit", xj-word="with", xj-pos="IN", PMI("hit with")
xi-word="hit", xi-pos="VBD", xj-word="with", PMI("hit with")
xi-word="hit", b-pos="ball", xj-word="with", PMI("hit with")
xi-word="hit", xj-word="with", PMI("hit with"), dir=R, dist=3
...
Table 1: An example of the N-gram PMI features and their conjunctions with the baseline features.

3.2.2 PP-attachment

Prepositional phrase (PP) attachment is one of the hardest problems in English dependency parsing. An English sentence consisting of a subject, a verb, and a nominal object followed by a prepositional phrase is often ambiguous. Resolving the ambiguity requires the selectional preferences between the verb or the noun and the prepositional phrase. For example, consider the following two sentences:

(1) John hit the ball with the bat.
(2) John hit the ball with the red stripe.

In sentence (1), the preposition with depends on the main verb hit; but in sentence (2), the prepositional phrase is a noun attribute and the preposition with should depend on the word ball. To resolve this kind of ambiguity, we need to measure the attachment preference. We thus have PP-attachment features that measure the PMI association across the preposition word "IN":7

\mathrm{PMI}_{IN}(x, z) = \log \frac{p(\text{"x IN z"})}{p(x)}   (5)
\mathrm{PMI}_{IN}(y, z) = \log \frac{p(\text{"y IN z"})}{p(y)}   (6)

where the words x and y are usually a verb and a noun, and z is a noun that directly depends on the preposition word "IN". For example, in sentence (1) we would include the features PMI_with(hit, bat) and PMI_with(ball, bat). If both PMI features exist and PMI_with(hit, bat) > PMI_with(ball, bat), this indicates to our dependency parsing model that attaching the preposition with to the verb hit is a good choice. In sentence (2), the features instead include PMI_with(hit, stripe) and PMI_with(ball, stripe).

7 Here, the preposition word "IN" (e.g., "with", "in", ...) is any token whose part-of-speech is IN.

3.3 N-gram feature templates

We generate N-gram features by mimicking the template structure of the original baseline features. For example, the baseline feature set includes indicators for word-to-word and tag-to-tag interactions between the head and modifier of a dependency. In the N-gram feature set, we correspondingly introduce N-gram PMI features for word-to-word interactions. The N-gram feature set for MSTParser is shown in Table 2. Following McDonald et al. (2005), all features are conjoined with the direction of attachment as well as the distance between the two words creating the dependency. For the in-between N-gram features, we include the form of the word trigrams and the PMI of the trigrams. The surrounding-word N-gram features represent the local context of the selectional preference.

hw, mw, PMI(hw, mw)
hw, ht, mw, PMI(hw, mw)
hw, mw, mt, PMI(hw, mw)
hw, ht, mw, mt, PMI(hw, mw)
...
hw, mw, sw
hw, mw, sw, PMI(hw, mw, sw)
hw, mw, gw
hw, mw, gw, PMI(hw, mw, gw)
Table 2: Examples of N-gram feature templates. Each entry represents a class of indicators for tuples of information. For example, "hw, mw" represents a class of indicator features with one feature for each possible combination of head word and modifier word. Abbreviations: hw = head word, ht = head POS; sw, st, gw, gt = likewise for sibling and grandchild.

We also include second-order feature templates, covering the sibling and grandchild features. These features are designed to disambiguate cases like coordinating conjunctions and prepositional attachment. Consider the examples shown in Section 3.2.2: for sentence (1), the dependency graph path feature ball → with → bat should have a lower weight, since ball is rarely modified by bat even though the words often occur together (correspondingly, a higher weight should be associated with hit → with → bat). In contrast, for sentence (2), our N-gram features will tell us that the prepositional phrase is much more likely to attach to the noun, since the dependency graph path feature ball → with → stripe should have a high weight due to the strong selectional preference between ball and stripe.

Web-derived selectional preference features based on PMI values are trickier to incorporate into the dependency parsing model because they are continuous rather than discrete. Since all the baseline features used in the literature (McDonald et al., 2005; Carreras, 2007) take on binary values of 0 or 1, there is a "mismatch" between the continuous and binary features, and the log-linear dependency parsing model is sensitive to inappropriately scaled features. To solve this problem, we transform the PMI values into a more amenable form by replacing each PMI value with its z-score. The z-score of a PMI value x is (x − µ)/σ, where µ and σ are the mean and standard deviation of the PMI distribution, respectively.

4 Experiments

In order to evaluate the effectiveness of our proposed approach, we conducted dependency parsing experiments in English. The experiments were performed on the Penn Treebank (PTB) (Marcus et al., 1993), using a standard set of head-selection rules (Yamada and Matsumoto, 2003) to convert the phrase-structure syntax of the Treebank into a dependency tree representation; dependency labels were obtained via the "Malt" hard-coded setting.8 We split the Treebank into a training set (Sections 2-21), a development set (Section 22), and several test sets (Sections 0,9 1, 23, and 24).
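Before continuing with the setup, the PMI feature values of Section 3.2 and the z-score rescaling described above can be summarized in a short sketch. This is a minimal illustration with our own helper names; the probabilities are assumed to be precomputed inputs, for example via the count-table sketch given earlier.

```python
import math
import statistics

def pmi(p_xy, p_x, p_y):
    """PMI as in Eq. (3); returns None when any probability is zero (undefined)."""
    if min(p_xy, p_x, p_y) == 0.0:
        return None
    return math.log(p_xy / (p_x * p_y))

def pmi_in(p_x_in_z, p_x):
    """PP-attachment association across a preposition, as in Eqs. (5) and (6)."""
    if min(p_x_in_z, p_x) == 0.0:
        return None
    return math.log(p_x_in_z / p_x)

def zscore_rescale(pmi_values):
    """Replace each defined PMI value with its z-score (x - mu) / sigma."""
    defined = [v for v in pmi_values if v is not None]
    if not defined:
        return pmi_values
    mu = statistics.mean(defined)
    sigma = statistics.pstdev(defined) or 1.0  # guard against a degenerate distribution
    return [None if v is None else (v - mu) / sigma for v in pmi_values]

# Undefined PMI values are not rescaled; as described in Section 3.2.1, they are
# replaced by one of two binary back-off indicators instead of a continuous value.
```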
The part-of-speech tags for the development and test sets were automatically assigned by the MXPOST tagger,10 where the tagger was trained on the entire training corpus. Web page hits for word pairs and trigrams are obtained using a simple heuristic query to the search engine Google.11 Inflected queries are performed by expanding a bigram or trigram into all of its morphological forms. These forms are then submitted as literal queries, and the resulting hits are summed up. John Carroll's suite of morphological tools12 is used to generate inflected forms of verbs and nouns. All search terms are submitted to the search engine in lower case and performed as exact matches by using quotation marks.

We measured the performance of the parsers using the following metrics: unlabeled attachment score (UAS), labeled attachment score (LAS), and complete match (CM), as defined by Hall et al. (2006). All metrics are calculated as mean scores per word, and punctuation tokens are consistently excluded.

8 http://w3.msi.vxu.se/~nivre/research/MaltXML.html
9 We removed a single 249-word sentence from Section 0 for computational reasons.
10 http://www.inf.ed.ac.uk/resources/nlp/local_doc/MXPOST.html
11 http://www.google.com/
12 http://www.cogs.susx.ac.uk/lab/nlp/carroll/morph.html

4.1 Main results

There are some clear trends in the results of Table 3. First, performance increases with the order of the parser: the edge-factored model (dep1) has the lowest performance, and adding sibling and grandchild relationships (dep2) significantly increases performance. Similar observations regarding the effect of model order have also been made by Carreras (2007) and Koo et al. (2008). Second, the parsers incorporating the N-gram feature sets consistently outperform the models using the baseline features on all test data sets, regardless of model order or label usage. Another finding is that the N-gram features derived from Google hits are slightly better than those derived from Google V1 due to the larger N-gram coverage, which we discuss later. As a final note, all the comparisons between the integration of N-gram features and the baseline features in Table 3 are mildly significant using the Z-test of Collins et al. (2005) (p < 0.08).

Sec | dep1  +hits  +V1   | dep2  +hits  +V1   | dep1-L +hits-L +V1-L | dep2-L +hits-L +V1-L
00  | 90.39 90.94 90.91 | 91.56 92.16 92.16 | 90.11  90.69  90.67  | 91.94  92.47  92.42
01  | 91.01 91.60 91.60 | 92.27 92.89 92.86 | 90.77  91.39  91.39  | 91.81  92.38  92.37
23  | 90.82 91.46 91.39 | 91.98 92.64 92.59 | 90.30  90.98  90.92  | 91.24  91.83  91.77
24  | 89.53 90.15 90.13 | 90.81 91.44 91.41 | 89.42  90.03  90.02  | 90.30  90.91  90.89
Table 3: Unlabeled accuracies (UAS) and labeled accuracies (LAS) on Sections 0, 1, 23, and 24. Abbreviations: dep1/dep2 = first-order and second-order parser with the baseline features; +hits = N-gram features derived from Google hits; +V1 = N-gram features derived from Google V1; suffix -L = labeled parser. Unlabeled parsers are scored using unlabeled parent predictions, and labeled parsers are scored using labeled parent predictions.
To put our results in perspective, we also compare them with other best-performing systems in Table 4. To facilitate comparisons with previous work, we only use Section 23 as the test data. The results show that our second-order model incorporating the N-gram features (92.64) performs better than most previously reported discriminative systems trained on the Treebank. Carreras et al. (2008) reported a very high accuracy using information about the constituent structure of the TAG grammar formalism, which our system does not use. When compared with the combined systems, our system is better than Nivre and McDonald (2008) and Zhang and Clark (2008), but slightly worse than Martins et al. (2008). We also compare our method with the semi-supervised approaches, which achieved very high accuracies by leveraging large unlabeled data directly in the systems for joint learning and decoding; in our method, we only explore the N-gram features to further improve supervised dependency parsing performance.

Type | System | UAS | CM
D | Yamada and Matsumoto (2003) | 90.3 | 38.7
D | McDonald et al. (2005) | 90.9 | 37.5
D | McDonald and Pereira (2006) | 91.5 | 42.1
D | Corston-Oliver et al. (2006) | 90.9 | 37.5
D | Hall et al. (2006) | 89.4 | 36.4
D | Wang et al. (2007) | 89.2 | 34.4
D | Carreras et al. (2008) | 93.5 |
D | Goldberg and Elhadad (2010)† | 91.32 | 40.41
D | Ours | 92.64 | 46.61
C | Nivre and McDonald (2008)† | 92.12 | 44.37
C | Martins et al. (2008)† | 92.87 | 45.51
C | Zhang and Clark (2008) | 92.1 | 45.4
S | Koo et al. (2008) | 93.16 |
S | Suzuki et al. (2009) | 93.79 |
S | Chen et al. (2009) | 93.16 | 47.15
Table 4: Comparison of our final results with other best-performing systems on the whole of Section 23. Types D, C, and S denote discriminative, combined, and semi-supervised systems, respectively. † These papers did not directly report results on this data set; we ran the experiments ourselves.

Table 5 shows the details of some other N-gram sources, where NEWS is created from a large set of news articles including the Reuters and Gigaword (Graff, 2003) corpora. For a given number of unique N-grams, the choice among these sources does not make a significant difference in Figure 3. Google hits is the largest N-gram source and shows the best performance. For the two smaller sources, accuracy increases linearly with the log of the number of types in the auxiliary data set; similar observations have been made by Pitler et al. (2010). We also see that the relationship between accuracy and the number of N-grams is not monotonic for Google V1. The reason may be that Google V1 is not carefully preprocessed and contains many mistakes. Although Google hits are noisier, they have much larger coverage of bigrams and trigrams. Some previous studies also found a log-linear relationship between performance and the amount of unlabeled data (Suzuki and Isozaki, 2008; Suzuki et al., 2009; Bergsma et al., 2010; Pitler et al., 2010). We have shown that this trend continues for dependency parsing using web-scale data (NEWS and Google V1).

Corpus | # of tokens | θ | # of types
NEWS | 3.2B | 1 | 3.7B
Google V1 | 1,024.9B | 40 | 3.4B
Google hits13 | 8,000B | 100 |
Table 5: N-gram data, with the total number of words in the original corpus (in billions, B). Following Brants and Franz (2006) and Pitler et al. (2010), we set a frequency threshold θ to filter the data, and report the total number of unique N-grams (types) remaining in the data.

13 Google indexes more than 8 billion pages, each containing about 1,000 words on average.

[Figure 3: There is no data like more data. UAS accuracy improves with the number of unique N-grams (NEWS and Google V1), but still remains lower than with Google hits.]

4.2 Improvement relative to dependency length

The experiments in McDonald and Nivre (2007) showed a negative impact on dependency parsing performance from overly long dependencies. For our proposed approach, the improvement relative to dependency length is shown in Figure 4.
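The per-length scores in Figures 1 and 4 follow the definitions in Footnote 1. A minimal sketch of this computation, assuming gold-standard and predicted trees are given as aligned head-index lists and omitting the punctuation filtering used in our evaluation, is shown below.

```python
from collections import defaultdict

def f1_by_dependency_length(gold_heads, pred_heads):
    """Per-length F1 as defined in Footnote 1.

    gold_heads / pred_heads: lists of sentences, each a list of head indices
    (0 = root), aligned word by word.  Arc length is |head - modifier|.
    Punctuation filtering is omitted here for brevity.
    """
    gold_arcs, pred_arcs, correct = defaultdict(int), defaultdict(int), defaultdict(int)
    for gold, pred in zip(gold_heads, pred_heads):
        for m, (gh, ph) in enumerate(zip(gold, pred), start=1):
            gold_arcs[abs(gh - m)] += 1
            pred_arcs[abs(ph - m)] += 1
            if gh == ph:
                correct[abs(gh - m)] += 1
    scores = {}
    for d in sorted(set(gold_arcs) | set(pred_arcs)):
        p = correct[d] / pred_arcs[d] if pred_arcs[d] else 0.0
        r = correct[d] / gold_arcs[d] if gold_arcs[d] else 0.0
        scores[d] = 2 * p * r / (p + r) if p + r else 0.0
    return scores
```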
From the figure, it can be seen that our method gives observably better performance when dependency lengths are larger than 3. The results show that the proposed approach improves dependency parsing performance, particularly for long dependency relationships.

[Figure 4: Dependency length vs. F1 score for MST2 and MST2+N-gram.]

4.3 Cross-genre testing

In this section, we present experiments to validate the robustness of the web-derived selectional preferences. The intent is to understand how well the web-derived selectional preferences transfer to other sources. The English experiment evaluates the performance of our proposed approach when it is trained on annotated data from one genre of text (WSJ) and used to parse a test set from a different genre: the biomedical domain related to cancer (PennBioIE, 2005), with 2,600 parsed sentences. We divided the data into 500 sentences for training, 100 for development, and the rest for testing. We created five sets of training data with 100, 200, 300, 400, and 500 sentences, respectively.

Figure 5 plots the UAS accuracy as a function of the number of training instances. WSJ is the performance of our second-order dependency parser trained on Sections 2-21; WSJ+N-gram is the performance of our proposed approach trained on Sections 2-21; WSJ+BioMed is the performance of the parser trained on WSJ and biomedical data; WSJ+BioMed+N-gram is the performance of our proposed approach trained on WSJ and biomedical data. The results show that incorporating the web-scale N-gram features can significantly improve dependency parsing performance, and the improvement is much larger than in the in-domain testing presented in Section 4.1; the reason may be that web-derived N-gram features do not depend directly on the training data and thus work better on new domains.

[Figure 5: Adapting a WSJ parser to biomedical text. WSJ: performance of the parser trained only on WSJ; WSJ+N-gram: performance of our proposed approach trained only on WSJ; WSJ+BioMed: parser trained on WSJ and biomedical text; WSJ+BioMed+N-gram: our approach trained on WSJ and biomedical text.]

4.4 Discussion

In this paper, we present a novel method to improve dependency parsing by using web-scale data. Despite its success, there are still some issues that should be discussed. (1) Google hits are less sparse than Google V1 in modeling word-to-word relationships, but they are likely to be noisier than Google V1. It would be appealing to carry out a correlation analysis to determine whether Google hits and Google V1 counts are highly correlated; we leave this for future research. (2) Veronis (2005) pointed out that there has been a debate about the reliability of Google hits due to inconsistencies in page-hit estimates. However, this estimate is scale-invariant: assume that as the number of pages indexed by Google grows, the fraction of pages containing a given search term approaches a fixed value. This means that if the number of pages indexed by Google doubles, then so do the bigram and trigram frequencies. Therefore, the estimate becomes stable as the number of indexed pages grows without bound. Further details are presented in Cilibrasi and Vitanyi (2007).

5 Related Work

Our approach is to exploit web-derived selectional preferences to improve dependency parsing. The idea of this paper is inspired by the work of Suzuki et al. (2009) and Pitler et al. (2010).
The former uses web-scale data explicitly to create more data for training the model, while the latter explores web-scale N-gram data (Lin et al., 2010) for compound bracketing disambiguation. Our research, however, applies web-scale data (Google hits and Google V1) to model word-to-word dependency relationships rather than compound bracketing disambiguation.

Several previous studies have exploited web-scale data for word pair acquisition. Keller and Lapata (2003) evaluated the utility of web search engine statistics for unseen bigrams. Nakov and Hearst (2005) demonstrated the effectiveness of using search engine statistics to improve noun compound bracketing. Volk (2001) exploited the WWW as a corpus to resolve PP attachment ambiguities. Turney (2003) measured semantic orientation for sentiment classification using co-occurrence statistics obtained from search engines. Bergsma et al. (2010) created robust supervised classifiers via web-scale N-gram data for adjective ordering, spelling correction, noun compound bracketing, and verb part-of-speech disambiguation. Our approach, however, extends these techniques to dependency parsing, particularly for long dependency relationships, which involves more challenging tasks than the previous work.

In addition, some work has explored word-to-word co-occurrences derived from web-scale data or a fixed-size corpus (Calvo and Gelbukh, 2004; Calvo and Gelbukh, 2006; Yates et al., 2006; Drabek and Zhou, 2000; van Noord, 2007) for PP attachment ambiguities or shallow parsing. Johnson and Riezler (2000) incorporated lexical selectional preference features derived from the British National Corpus (Graff, 2003) into a stochastic unification-based grammar. Abekawa and Okumura (2006) improved Japanese dependency parsing by using co-occurrence information derived from the results of automatic dependency parsing of large-scale corpora. In contrast, we explore web-scale data for dependency parsing, where performance improves log-linearly with the number of parameters (unique N-grams). To the best of our knowledge, web-derived selectional preference has not previously been successfully applied to dependency parsing.

6 Conclusion

In this paper, we present a novel method which incorporates web-derived selectional preferences to improve statistical dependency parsing. The results show that web-scale data improves dependency parsing, particularly for long dependency relationships. There is no data like more data: performance improves log-linearly with the number of parameters (unique N-grams). More importantly, when operating on new domains, the web-derived selectional preferences show great potential for achieving robust performance.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 60875041 and No. 61070106), and by the CSIDM project (No. CSIDM200805), partially funded by a grant from the National Research Foundation (NRF) administered by the Media Development Authority (MDA) of Singapore. We thank the anonymous reviewers for their insightful comments.

References

T. Abekawa and M. Okumura. 2006. Japanese dependency parsing using co-occurrence information and a combination of case elements. In Proceedings of ACL-COLING.

S. Bergsma, D. Lin, and R. Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. In Proceedings of EMNLP, pages 59-68.

S. Bergsma, E. Pitler, and D. Lin. 2010. Creating robust supervised classifiers via web-scale N-gram data. In Proceedings of ACL.
T. Brants and A. Franz. 2006. The Google Web 1T 5-gram Corpus Version 1.1. LDC2006T13.

H. Calvo and A. Gelbukh. 2004. Acquiring selectional preferences from untagged text for prepositional phrase attachment disambiguation. In Proceedings of VLDB.

H. Calvo and A. Gelbukh. 2006. DILUCT: An open-source Spanish dependency parser based on rules, heuristics, and selectional preferences. In Lecture Notes in Computer Science 3999, pages 164-175.

X. Carreras. 2007. Experiments with a higher-order projective dependency parser. In Proceedings of EMNLP-CoNLL, pages 957-961.

X. Carreras, M. Collins, and T. Koo. 2008. TAG, dynamic programming, and the perceptron for efficient, feature-rich parsing. In Proceedings of CoNLL.

E. Charniak, D. Blaheta, N. Ge, K. Hall, and M. Johnson. 2000. BLLIP 1987-89 WSJ Corpus Release 1, LDC No. LDC2000T43. Linguistic Data Consortium.

W. Chen, D. Kawahara, K. Uchimoto, and K. Torisawa. 2009. Improving dependency parsing with subtrees from auto-parsed data. In Proceedings of EMNLP, pages 570-579.

K. W. Church and P. Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.

R. L. Cilibrasi and P. M. B. Vitanyi. 2007. The Google similarity distance. IEEE Transactions on Knowledge and Data Engineering, 19(3):370-383.

M. Collins, A. Globerson, T. Koo, X. Carreras, and P. L. Bartlett. 2008. Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. Journal of Machine Learning Research, pages 1775-1822.

M. Collins, P. Koehn, and I. Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL, pages 531-540.

S. Corston-Oliver, A. Aue, K. Duh, and E. Ringger. 2006. Multilingual dependency parsing using Bayes point machines. In Proceedings of NAACL.

H. Daumé III. 2007. Frustratingly easy domain adaptation. In Proceedings of ACL.

E. F. Drabek and Q. Zhou. 2000. Using co-occurrence statistics as an information source for partial parsing of Chinese. In Proceedings of the Second Chinese Language Processing Workshop, ACL, pages 22-28.

Y. Goldberg and M. Elhadad. 2010. An efficient algorithm for easy-first non-directional dependency parsing. In Proceedings of NAACL, pages 742-750.

D. Graff. 2003. English Gigaword, LDC2003T05.

J. Hall, J. Nivre, and J. Nilsson. 2006. Discriminative classifiers for deterministic dependency parsing. In Proceedings of ACL, pages 316-323.

M. Johnson and S. Riezler. 2000. Exploiting auxiliary distributions in stochastic unification-based grammars. In Proceedings of NAACL.

T. Koo, X. Carreras, and M. Collins. 2008. Simple semi-supervised dependency parsing. In Proceedings of ACL, pages 595-603.

F. Keller and M. Lapata. 2003. Using the web to obtain frequencies for unseen bigrams. Computational Linguistics, 29(3):459-484.

M. Lapata and F. Keller. 2005. Web-based models for natural language processing. ACM Transactions on Speech and Language Processing, 2(1), pages 1-30.

M. Lauer. 1995. Corpus statistics meet the noun compound: some empirical results. In Proceedings of ACL.

D. Lin, K. Church, H. Ji, S. Sekine, D. Yarowsky, S. Bergsma, K. Patil, E. Pitler, R. Lathbury, V. Rao, K. Dalwani, and S. Narsale. 2010. New tools for web-scale N-grams. In Proceedings of LREC.

M. P. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics.

A. F. T. Martins, D. Das, N. A. Smith, and E. P. Xing. 2008. Stacking dependency parsers. In Proceedings of EMNLP, pages 157-166.
D. McClosky, E. Charniak, and M. Johnson. 2006. Reranking and self-training for parser adaptation. In Proceedings of ACL.

D. McClosky, E. Charniak, and M. Johnson. 2010. Automatic domain adaptation for parsing. In Proceedings of NAACL-HLT.

R. McDonald and J. Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of EMNLP-CoNLL.

R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81-88.

R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91-98.

P. Nakov and M. Hearst. 2005. Search engine statistics beyond the n-gram: application to noun compound bracketing. In Proceedings of CoNLL.

J. Nivre and R. McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL, pages 950-958.

G. van Noord. 2007. Using self-trained bilexical preferences to improve disambiguation accuracy. In Proceedings of IWPT, pages 1-10.

PennBioIE. 2005. Mining the bibliome project. http://bioie.ldc.upenn.edu/.

E. Pitler, S. Bergsma, D. Lin, and K. Church. 2010. Using web-scale N-grams to improve base NP parsing performance. In Proceedings of COLING, pages 886-894.

P. Resnik. 1993. Selection and information: a class-based approach to lexical relationships. Ph.D. thesis, University of Pennsylvania.

J. Suzuki, H. Isozaki, X. Carreras, and M. Collins. 2009. An empirical study of semi-supervised structured conditional models for dependency parsing. In Proceedings of EMNLP, pages 551-560.

J. Suzuki and H. Isozaki. 2008. Semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. In Proceedings of ACL, pages 665-673.

P. D. Turney. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems, 21(4).

J. Veronis. 2005. Web: Google adjusts its counts. Jean Veronis' blog: http://aixtal.blogspot.com/2005/03/web-google-adjusts-its-count.html.

M. Volk. 2001. Exploiting the WWW as a corpus to resolve PP attachment ambiguities. In Proceedings of Corpus Linguistics.

Q. I. Wang, D. Lin, and D. Schuurmans. 2007. Simple training of dependency parsers via structured boosting. In Proceedings of IJCAI, pages 1756-1762.

H. Yamada and Y. Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195-206.

A. Yates, S. Schoenmackers, and O. Etzioni. 2006. Detecting parser errors using web-based semantic filters. In Proceedings of EMNLP, pages 27-34.

Y. Zhang and S. Clark. 2008. A tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beam-search. In Proceedings of EMNLP, pages 562-571.